{ "paper_id": "S16-1003", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:25:21.400667Z" }, "title": "SemEval-2016 Task 6: Detecting Stance in Tweets", "authors": [ { "first": "Saif", "middle": [ "M" ], "last": "Mohammad", "suffix": "", "affiliation": {}, "email": "saif.mohammad@nrc-cnrc.gc.ca" }, { "first": "Svetlana", "middle": [], "last": "Kiritchenko", "suffix": "", "affiliation": {}, "email": "svetlana.kiritchenko@nrc-cnrc.gc.ca" }, { "first": "Parinaz", "middle": [], "last": "Sobhani", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Xiaodan", "middle": [], "last": "Zhu", "suffix": "", "affiliation": {}, "email": "xiaodan.zhu@nrc-cnrc.gc.ca" }, { "first": "Colin", "middle": [], "last": "Cherry", "suffix": "", "affiliation": {}, "email": "colin.cherry@nrc-cnrc.gc.ca" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Here for the first time we present a shared task on detecting stance from tweets: given a tweet and a target entity (person, organization, etc.), automatic natural language systems must determine whether the tweeter is in favor of the given target, against the given target, or whether neither inference is likely. The target of interest may or may not be referred to in the tweet, and it may or may not be the target of opinion. Two tasks are proposed. Task A is a traditional supervised classification task where 70% of the annotated data for a target is used as training and the rest for testing. For Task B, we use as test data all of the instances for a new target (not used in task A) and no training data is provided. Our shared task received submissions from 19 teams for Task A and from 9 teams for Task B. The highest classification F-score obtained was 67.82 for Task A and 56.28 for Task B. However, systems found it markedly more difficult to infer stance towards the target of interest from tweets that express opinion towards another entity.", "pdf_parse": { "paper_id": "S16-1003", "_pdf_hash": "", "abstract": [ { "text": "Here for the first time we present a shared task on detecting stance from tweets: given a tweet and a target entity (person, organization, etc.), automatic natural language systems must determine whether the tweeter is in favor of the given target, against the given target, or whether neither inference is likely. The target of interest may or may not be referred to in the tweet, and it may or may not be the target of opinion. Two tasks are proposed. Task A is a traditional supervised classification task where 70% of the annotated data for a target is used as training and the rest for testing. For Task B, we use as test data all of the instances for a new target (not used in task A) and no training data is provided. Our shared task received submissions from 19 teams for Task A and from 9 teams for Task B. The highest classification F-score obtained was 67.82 for Task A and 56.28 for Task B. However, systems found it markedly more difficult to infer stance towards the target of interest from tweets that express opinion towards another entity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Stance detection is the task of automatically determining from text whether the author of the text is in favor of, against, or neutral towards a proposition or target. The target may be a person, an organization, a government policy, a movement, a product, etc. 
For example, one can infer from Barack Obama's speeches that he is in favor of stricter gun laws in the US. Similarly, people often express stance towards various target entities through posts on online forums, blogs, Twitter, Youtube, Instagram, etc. Automatically detecting stance has widespread applications in information retrieval, text summarization, and textual entailment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The task we explore is formulated as follows: given a tweet text and a target entity (person, organization, movement, policy, etc.) , automatic natural language systems must determine whether the tweeter is in favor of the given target, against the given target, or whether neither inference is likely. For example, consider the target-tweet pair:", "cite_spans": [ { "start": 85, "end": 131, "text": "(person, organization, movement, policy, etc.)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Target: legalization of abortion", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(1) Tweet: The pregnant are more than walking incubators, and have rights!", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We can deduce from the tweet that the tweeter is likely in favor of the target. 1 We annotated 4870 English tweets for stance towards six commonly known targets in the United States. The data corresponding to five of the targets ('Atheism', 'Climate Change is a Real Concern', 'Feminist Movement', 'Hillary Clinton', and 'Legalization of Abortion') was used in a standard supervised stance detection task -Task A. About 70% of the tweets per target were used for training and the remaining for testing. All of the data corresponding to the target 'Donald Trump' was used as test set in a separate task -Task B. No training data labeled with stance towards 'Donald Trump' was provided. However, participants were free to use data from Task A to develop their models for Task B.", "cite_spans": [ { "start": 80, "end": 81, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Task A received submissions from 19 teams, wherein the highest classification F-score obtained was 67.82. Task B, which is particularly challenging due to lack of training data, received submissions from 9 teams wherein the highest classification F-score obtained was 56.28. The best performing systems used standard text classification features such as those drawn from n-grams, word vectors, and sentiment lexicons. Some teams drew additional gains from noisy stance-labeled data created using distant supervision techniques. A large number of teams used word embeddings and some used deep neural networks such as RNNs and convolutional neural nets. Nonetheless, for Task A, none of these systems surpassed a baseline SVM classifier that uses word and character n-grams as features (Mohammad et al., 2016b) . Further, results are markedly worse for instances where the target of interest is not the target of opinion.", "cite_spans": [ { "start": 784, "end": 808, "text": "(Mohammad et al., 2016b)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "More gains can be expected in the future on both tasks, as researchers better understand this new task and data. 
All of the data, an interactive visualization of the data, and the evaluation scripts are available on the task website as well as the homepage for this Stance project. 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the sub-sections below we discuss some of the nuances of stance detection, including a discussion on neutral stance and the relationship between stance and sentiment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Subtleties of Stance Detection", "sec_num": "2" }, { "text": "The classification task formulated here does not include an explicit neutral class. The lack of evidence for 'favor' or 'against' does not imply that the tweeter is neutral towards the target. It may just be that one cannot deduce stance from the tweet. In fact, this is fairly common. On the other hand, the number of tweets from which we can infer neutral stance is expected to be small. An example is shown below:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neutral Stance", "sec_num": "2.1" }, { "text": "Target: Hillary Clinton", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neutral Stance", "sec_num": "2.1" }, { "text": "(2) Tweet: Hillary Clinton has some strengths and some weaknesses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neutral Stance", "sec_num": "2.1" }, { "text": "Thus, even though we obtain annotations for neutral stance, we eventually merge all classes other than 'favor' and 'against' into one 'neither' class.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neutral Stance", "sec_num": "2.1" }, { "text": "Stance detection is related to, but different from, sentiment analysis. Sentiment analysis tasks are usually formulated as: determining whether a piece of text is positive, negative, or neutral, OR determining from text the speaker's opinion and the target of the opinion (the entity towards which opinion is expressed). However, in stance detection, systems are to determine favorability towards a given (prechosen) target of interest. The target of interest may not be explicitly mentioned in the text and it may not be the target of opinion in the text. For example, consider the target-tweet pair below:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stance and Sentiment", "sec_num": "2.2" }, { "text": "Target: Donald Trump (3) Tweet: Jeb Bush is the only sane candidate in this republican lineup.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stance and Sentiment", "sec_num": "2.2" }, { "text": "The target of opinion in the tweet is Jeb Bush, but the given target of interest is Donald Trump. Nonetheless, we can infer that the tweeter is likely to be unfavorable towards Donald Trump. Also note that in stance detection, the target can be expressed in different ways which impacts whether the instance is labeled favour or against. For example, the target in example 1 could have been phrased as 'pro-life movement', in which case the correct label for that instance is 'against'. Also, the same stance (favour or against) towards a given target can be deduced from positive tweets and negative tweets. See Mohammad et al. (2016b) for a quantitative exploration of this interaction between stance and sentiment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stance and Sentiment", "sec_num": "2.2" }, { "text": "The stance annotations we use are described in detail in Mohammad et al. (2016a) . 
The same dataset was subsequently also annotated for target of opinion and sentiment (in addition to stance towards a given target) (Mohammad et al., 2016b) . These additional annotations are not part of the SemEval-2016 competition, but are made available for future research. We summarize below all relevant details for this shared task: how we compiled a set of tweets and targets for stance annotation (Section 3.1), the questionnaire and crowdsourcing setup used for stance annotation (Section 3.2), and an analysis of the stance annotations (Section 3.3).", "cite_spans": [ { "start": 57, "end": 80, "text": "Mohammad et al. (2016a)", "ref_id": null }, { "start": 215, "end": 239, "text": "(Mohammad et al., 2016b)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "A Dataset for Stance from Tweets", "sec_num": "3" }, { "text": "We wanted to create a dataset of stance-labeled tweet-target pairs with the following properties: 1: The tweet and target are commonly understood by a wide number of people in the US. (The data was also eventually annotated for stance by respondents living in the US.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Selecting the Tweet-Target Pairs", "sec_num": "3.1" }, { "text": "2: There must be a significant amount of data for the three classes: favor, against, and neither.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Selecting the Tweet-Target Pairs", "sec_num": "3.1" }, { "text": "3: Apart from tweets that explicitly mention the target, the dataset should include a significant number of tweets that express opinion towards the target without referring to it by name.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Selecting the Tweet-Target Pairs", "sec_num": "3.1" }, { "text": "4: Apart from tweets that express opinion towards the target, the dataset should include a significant number of tweets in which the target of opinion is different from the given target of interest. Downstream applications often require stance towards particular pre-chosen targets of interest (for example, a company might be interested in stance towards its product). Having data where the target of opinion is some other entity (for example, a competitor's product) helps test how well stance detection systems can cope with such instances.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Selecting the Tweet-Target Pairs", "sec_num": "3.1" }, { "text": "To help with Property 1, the authors of this paper compiled a list of target entities commonly known in the United States. (See Table 1 for the list.)", "cite_spans": [], "ref_spans": [ { "start": 128, "end": 135, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Selecting the Tweet-Target Pairs", "sec_num": "3.1" }, { "text": "We created a small list of hashtags, which we will call query hashtags, that people use when tweeting about the targets. We split these hashtags into three categories: (1) favor hashtags: expected to occur in tweets expressing favorable stance towards the target (for example, #Hillary4President), (2) against hashtags: expected to occur in tweets expressing opposition to the target (for example, #HillNo), and (3) stance-ambiguous hashtags: expected to occur in tweets about the target, but are not explicitly indicative of stance (for example, #Hillary2016). Next, we polled the Twitter API to collect over two million tweets containing these query hashtags. We discarded retweets and tweets with URLs. 
We kept only those tweets where the query hashtags appeared at the end. We removed the query hashtags from the tweets to exclude obvious cues for the classification task. Since we only select tweets that have the query hashtag at the end, removing them from the tweet often still results in text that is understandable and grammatical.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Selecting the Tweet-Target Pairs", "sec_num": "3.1" }, { "text": "Note that the presence of a stance-indicative hashtag is not a guarantee that the tweet will have the same stance. 3 Further, removal of query hashtags may result in a tweet that no longer expresses the same stance as with the query hashtag. Thus we manually annotate the tweet-target pairs after the pre-processing described above. For each target, we sampled an equal number of tweets pertaining to the favor hashtags, the against hashtags, and the stanceambiguous hashtags-up to 1000 tweets at most per target. This helps in obtaining a sufficient number of tweets pertaining to each of the stance categories (Property 2). Properties 3 and 4 are addressed to some extent by the fact that removing the query hashtag can sometimes result in tweets that do not explicitly mention the target. Consider:", "cite_spans": [ { "start": 115, "end": 116, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Selecting the Tweet-Target Pairs", "sec_num": "3.1" }, { "text": "Target: Hillary Clinton (4) Tweet: Benghazi must be answered for #Jeb16", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Selecting the Tweet-Target Pairs", "sec_num": "3.1" }, { "text": "The query hashtags '#HillNo' was removed from the original tweet, leaving no mention of Hillary Clinton. Yet there is sufficient evidence (through references to Benghazi and #Jeb16) that the tweeter is likely against Hillary Clinton. Further, conceptual targets such as 'legalization of abortion' (much more so than person-name targets) have many instances where the target is not explicitly mentioned.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Selecting the Tweet-Target Pairs", "sec_num": "3.1" }, { "text": "The core instructions given to annotators for determining stance are shown below. 4 Additional descriptions within each option (not shown here) make clear that stance can be expressed in many different ways, for example by explicitly supporting or opposing the target, by supporting an entity aligned with or opposed to the target, by re-tweeting somebody else's tweet, etc.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stance Annotation", "sec_num": "3.2" }, { "text": "Tweet: [tweet with query hashtag removed] Q: From reading the tweet, which of the options below is most likely to be true about the tweeter's stance or outlook towards the target:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Target of Interest: [target entity]", "sec_num": null }, { "text": "1. We can infer from the tweet that the tweeter supports the target 2. We can infer from the tweet that the tweeter is against the target 3. We can infer from the tweet that the tweeter has a neutral stance towards the target 4. 
There is no clue in the tweet to reveal the stance of the tweeter towards the target (support/against/neutral)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Target of Interest: [target entity]", "sec_num": null }, { "text": "Each of the tweet-target pairs selected for annotation was uploaded on CrowdFlower for annotation with the questionnaire shown above. 5 Each instance was annotated by at least eight respondents.", "cite_spans": [ { "start": 134, "end": 135, "text": "5", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Target of Interest: [target entity]", "sec_num": null }, { "text": "The number of instances that were marked as neutral stance (option 3) was less than 1%. Thus we merged options 3 and 4 into one 'neither in favor nor against' option ('neither' for short). The interannotator agreement was 73.1%. These statistics are for the complete annotated dataset, which include instances that were genuinely difficult to annotate for stance (possibly because the tweets were too ungrammatical or vague) and/or instances that received poor annotations from the crowd workers (possibly because the particular annotator did not understand the tweet or its context). We selected instances with agreement equal to or greater than 60% (at least 5 out of 8 annotators must agree) to create the test and training sets for this task. 6 We will refer to this dataset as the Stance Dataset. The inter-annotator agreement on this set is 81.85%. The rest of the instances are kept aside for future investigation. We partitioned the Stance Dataset into training and test sets based on the timestamps of the tweets. All annotated tweets were ordered by their timestamps, and the first 70% of the tweets formed the training set and the last 30% formed the test set. Table 1 shows the number and distribution of instances in the Stance Dataset. Inspection of the data revealed that often the target is not directly mentioned, and yet stance towards the target was determined by the annotators. About 30% of the 'Hillary Clinton' instances and 65% of the 'Legalization of Abortion' instances were found to be of this kind-they did not mention 'Hillary' or 'Clinton' and did not mention 'abortion', 'pro-life', and 'pro-choice', respectively (case insensitive; with or without hashtag; with or without hyphen). Examples (1) and (4) shown earlier are instances of this, and are taken from our dataset. An interactive visualization of the Stance Dataset that shows various statistics about the data is available at the task website. Note that it also shows sentiment and target of opinion annotations (in addition to stance). Clicking on various visualization elements filters the data. For example, clicking on 'Feminism' and 'Favor' will show information pertaining to only those tweets that express favor towards feminism. One can also use the check boxes on the left to view only test or training data, or data on particular targets.", "cite_spans": [ { "start": 747, "end": 748, "text": "6", "ref_id": null } ], "ref_spans": [ { "start": 1172, "end": 1179, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Analysis of Stance Annotations", "sec_num": "3.3" }, { "text": "The Stance Dataset was partitioned so as to be used in two tasks described in the subsections below: Task A (supervised framework) and Task B (weakly supervised framework). Participants could provide submissions for either one of the tasks, or both tasks. 
Both tasks required classification of tweet-target pairs into exactly one of three classes:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Setup: Automatic Stance Classification", "sec_num": "4" }, { "text": "\u2022 Favor: We can infer from the tweet that the tweeter supports the target (e.g., directly or indirectly by supporting someone/something, by opposing or criticizing someone/something opposed to the target, or by echoing the stance of somebody else).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Setup: Automatic Stance Classification", "sec_num": "4" }, { "text": "\u2022 Against: We can infer from the tweet that the tweeter is against the target (e.g., directly or indirectly by opposing or criticizing someone/something, by supporting someone/something opposed to the target, or by echoing the stance of somebody else).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Setup: Automatic Stance Classification", "sec_num": "4" }, { "text": "\u2022 Neither: none of the above. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Setup: Automatic Stance Classification", "sec_num": "4" }, { "text": "This task tested stance towards one target 'Donald Trump' in 707 tweets. Participants were not provided with any training data for this target. They were given about 78,000 tweets associated with 'Donald Trump' to various degrees (the domain corpus), but these tweets were not labeled for stance. These tweets were gathered by polling Twitter for hashtags associated with Donald Trump.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task B: Weakly Supervised Framework", "sec_num": "4.2" }, { "text": "We used the macro-average of the F1-score for 'favor' and the F1-score for 'against' as the bottom-line evaluation metric.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Common Evaluation Metric for Both Task A and Task B", "sec_num": "4.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "F_{avg} = \\frac{F_{favor} + F_{against}}{2}", "eq_num": "(1)" } ], "section": "Common Evaluation Metric for Both Task A and Task B", "sec_num": "4.3" }, { "text": "where F_{favor} and F_{against} are calculated as shown below:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Common Evaluation Metric for Both Task A and Task B", "sec_num": "4.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "F_{favor} = \\frac{2 P_{favor} R_{favor}}{P_{favor} + R_{favor}}", "eq_num": "(2)" } ], "section": "Common Evaluation Metric for Both Task A and Task B", "sec_num": "4.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "F_{against} = \\frac{2 P_{against} R_{against}}{P_{against} + R_{against}}", "eq_num": "(3)" } ], "section": "Common Evaluation Metric for Both Task A and Task B", "sec_num": "4.3" }, { "text": "Note that the evaluation measure does not disregard the 'neither' class. By taking the average F-score for only the 'favor' and 'against' classes, we treat 'neither' as a class that is not of interest, i.e., the 'negative' class in Information Retrieval (IR) terms. Falsely labeling negative class instances still adversely affects the scores of this metric. 
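To make the computation concrete, the following minimal Python sketch implements this metric over parallel lists of gold and predicted labels. It is written for this description and is not the official evaluation script distributed to participants; the label strings 'FAVOR', 'AGAINST', and 'NONE' and the toy gold/predicted lists are illustrative assumptions.

# Minimal sketch of the Task A/B metric; label names are illustrative assumptions.
def f1_for_class(gold, pred, cls):
    # Standard per-class F1 from true positives, false positives, false negatives.
    tp = sum(1 for g, p in zip(gold, pred) if g == cls and p == cls)
    fp = sum(1 for g, p in zip(gold, pred) if g != cls and p == cls)
    fn = sum(1 for g, p in zip(gold, pred) if g == cls and p != cls)
    precision = tp / (tp + fp) if (tp + fp) > 0 else 0.0
    recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0
    return 2 * precision * recall / (precision + recall) if (precision + recall) > 0 else 0.0

def f_avg(gold, pred):
    # 'NONE' is not averaged in, but errors on it still lower the precision and
    # recall of 'FAVOR' and 'AGAINST', so the 'neither' class is not ignored.
    return (f1_for_class(gold, pred, 'FAVOR') + f1_for_class(gold, pred, 'AGAINST')) / 2

# Hypothetical example with three instances.
print(f_avg(['FAVOR', 'AGAINST', 'NONE'], ['FAVOR', 'NONE', 'AGAINST']))

On this toy example the score is 0.5: 'favor' is predicted perfectly, but predicting 'AGAINST' on the 'NONE' instance lowers the precision of the 'against' class (and the missed 'AGAINST' instance lowers its recall), which illustrates how errors involving the 'neither' class are still penalized.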
If one uses simple accuracy as the evaluation metric, and if the negative class is very dominant (as is the case in IR), then simply labeling every instance with the negative class will obtain very high scores.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Common Evaluation Metric for Both Task A and Task B", "sec_num": "4.3" }, { "text": "If one randomly accesses tweets, then the probability that one can infer 'favor' or 'against' stance towards a pre-chosen target of interest is small. This has motivated the IR-like metric used in this competition, even though we worked hard to have marked amounts of 'favor' and 'against' data in our training and test sets. This metric is also similar to how sentiment prediction was evaluated in recent SemEval competitions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Common Evaluation Metric for Both Task A and Task B", "sec_num": "4.3" }, { "text": "This evaluation metric can be seen as a microaverage of F-scores across targets (F-microT). Alternatively, one could determine the mean of the F avg scores for each of the targets-the macro average across targets (F-macroT). Even though not the official competition metric, the F-macroT can easily be determined from the per-target F avg scores shown in the result tables of Section 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Common Evaluation Metric for Both Task A and Task B", "sec_num": "4.3" }, { "text": "The participants were provided with an evaluation script so that they could check the format of their submission and determine performance when gold labels were available.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Common Evaluation Metric for Both Task A and Task B", "sec_num": "4.3" }, { "text": "We now discuss various baseline systems and the official submissions to Task A. Table 2 presents the results obtained with several baseline classifiers first presented in (Mohammad et al., 2016b) . Since the baseline system was developed by some of the organizers of this task, it was not entered as part of the official competition. Baselines:", "cite_spans": [ { "start": 171, "end": 195, "text": "(Mohammad et al., 2016b)", "ref_id": "BIBREF16" } ], "ref_spans": [ { "start": 80, "end": 87, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Systems and Results for Task A", "sec_num": "5" }, { "text": "1. Majority class: a classifier that simply labels every instance with the majority class ('favor' or 'against') for the corresponding target;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task A Baselines", "sec_num": "5.1" }, { "text": "2. SVM-unigrams: five SVM classifiers (one per target) trained on the corresponding training set for the target using word unigram features;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task A Baselines", "sec_num": "5.1" }, { "text": "3. SVM-ngrams: five SVM classifiers (one per target) trained on the corresponding training set for the target using word n-grams (1-, 2-, and 3-gram) and character n-grams (2-, 3-, 4-, and 5-gram) features;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task A Baselines", "sec_num": "5.1" }, { "text": "4. 
SVM-ngrams-comb: one SVM classifier trained on the combined (all 5 targets) training set using word n-grams (1-, 2-, and 3-gram) and character n-grams (2-, 3-, 4-, and 5-gram) features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task A Baselines", "sec_num": "5.1" }, { "text": "The SVM parameters were tuned using 5-fold crossvalidation on the training data. The first three columns of the table show the official competition metric (Overall F avg ) along with the two components that are averaged to obtain it (F f avor and F against ). The next five columns describe per-target results-the official metric as calculated over each of the targets individually.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task A Baselines", "sec_num": "5.1" }, { "text": "Observe that the Overall F avg for the Majority class baseline is very high. This is mostly due to the differences in the class distributions for the five targets: for most of the targets the majority of the instances are labeled as 'against' whereas for target 'Climate Change is a Real Concern' most of the data are labeled as 'favor'. Therefore, the F-scores for the classes 'favor' and 'against' are more balanced over all targets than for just one target.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task A Baselines", "sec_num": "5.1" }, { "text": "We can see that a supervised classifier using unigram features alone produces results markedly above the majority baseline for most of the targets. Furthermore, employing higher-order n-gram features results in substantial improvements for all targets as well as for the Overall F avg . Training separate classifiers for each target seems a better solution than training a single classifier for all targets even though the combined classifier has access to significantly more data. As expected, the words and concepts used in tweets corresponding to the stance categories do not generalize well across the targets. However, there is one exception: the results for 'Climate Change' improve by over 5% when the combined classifier has access to the training data for other targets. This is probably because it has access to more balanced dataset and more representative instances for 'against' class. Most teams chose to train separate classifiers for different targets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task A Baselines", "sec_num": "5.1" }, { "text": "Nineteen teams competed in Task A on supervised stance detection. Table 2 shows each team's performance, both in aggregate and in terms of individual targets. Teams are sorted in terms of their performance according to the official metric. The best results obtained by a participating system was an Overall F avg of 67.82 by MITRE. Their approach employed two recurrent neural network (RNN) classifiers: the first was trained to predict task-relevant hashtags on a very large unlabeled Twitter corpus. This network was used to initialize a second RNN classifier, which was trained with the provided Task A data. However, this result is not higher than the SVM-ngrams baseline.", "cite_spans": [], "ref_spans": [ { "start": 66, "end": 73, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Task A Participating Stance Systems", "sec_num": "5.2" }, { "text": "In general, per-target results are lower than the Overall F avg . This is likely due to the fact that it is easier to balance 'favor' and 'against' classes over all targets than it is for exactly one target. 
That is, when dealing with all targets, one can use the natural abundance of tweets in favor of concern over climate change to balance against the fact that many of the other targets have a high proportion of tweets against them. Most systems were optimized for the competition metric, which allows cross-target balancing, and thus would naturally perform worse on per-target metrics. IDI@NTNU is an interesting exception, as their submission focused on the 'Climate Change' target, and they did succeed in producing the best result for that target.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task A Participating Stance Systems", "sec_num": "5.2" }, { "text": "We also calculated Task A results on two subsets of the test set: (1) a subset where opinion is expressed towards the target, (2) a subset where opinion is expressed towards some other entity. Table 3 shows these results. It also shows results on the complete test set (All), for easy reference. Observe that the stance task is markedly more difficult when stance is to be inferred from a tweet expressing opinion about some other entity (and not the target of interest). This is not surprising because it is a more challenging task, and because there has been very little work on this in the past.", "cite_spans": [], "ref_spans": [ { "start": 193, "end": 201, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Task A Participating Stance Systems", "sec_num": "5.2" }, { "text": "Most teams used standard text classification features such as n-grams and word embedding vectors, as well as standard sentiment analysis features such as those drawn from sentiment lexicons (Kiritchenko et al., 2014b) . Some teams polled Twitter for stancebearing hashtags, creating additional noisy stance data. Three teams tried variants of this strategy: MITRE, DeepStance and nldsucsc. These teams are distributed somewhat evenly throughout the standings, and although MITRE did use extra data in its top-placing entry, pkudblab achieved nearly the same score with only the provided data.", "cite_spans": [ { "start": 190, "end": 217, "text": "(Kiritchenko et al., 2014b)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5.3" }, { "text": "Another possible differentiator would be the use of continuous word representations, derived either from extremely large sources such as Google News, directly from Twitter corpora, or as a by-product of training a neural network classifier. Nine of the nineteen entries used some form of word embedding, including the top three entries, but PKULCWM's fourth place result shows that it is possible to do well with a more traditional approach that relies instead on Twitter-specific linguistic pre-processing. 
Along these lines, it is worth noting that both MITRE and pkudblab reflect knowledge-light approaches to the problem, each relying minimally on linguistic processing and external lexicons.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5.3" }, { "text": "Seven of the nineteen submissions made extensive use of publicly-available sentiment and emotion lexicons such as the NRC Emotion Lexicon (Mohammad and Turney, 2010), Hu and Liu Lexicon (Hu and Liu, 2004) , MPQA Subjectivity Lexicon (Wilson et al., 2005) , and NRC Hashtag Lexicons (Kiritchenko et al., 2014b) .", "cite_spans": [ { "start": 167, "end": 204, "text": "Hu and Liu Lexicon (Hu and Liu, 2004)", "ref_id": null }, { "start": 233, "end": 254, "text": "(Wilson et al., 2005)", "ref_id": "BIBREF29" }, { "start": 282, "end": 309, "text": "(Kiritchenko et al., 2014b)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5.3" }, { "text": "Recall that the SVM-ngrams baseline also performed very well, using only word and character ngrams in its classifiers. This helps emphasize the fact that for this young task, the community is still a long way from an established set of best practices.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5.3" }, { "text": "The sub-sections below discuss baselines and official submissions to Task B. Recall, that the test data for Task B is for the target 'Donald Trump', and no training data for this target was provided.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Systems and Results for Task B", "sec_num": "6" }, { "text": "We calculated two baselines listed below:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task B Baselines", "sec_num": "6.1" }, { "text": "1. Majority class: a classifier that simply labels every instance with the majority class ('favor' or 'against') for the corresponding target; 2. SVM-ngrams-comb: one SVM classifier trained on the combined (all 5 targets) Task A training set, using word n-grams (1-, 2-, and 3-gram) and character n-grams (2-, 3-, 4-, and 5-gram) features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task B Baselines", "sec_num": "6.1" }, { "text": "The results are presented in Table 4 . Note that the class distribution for the target 'Donald Trump' is more balanced. Therefore, the F avg for the Majority baseline for this target is much lower than the corresponding values for other targets. Yet, the combined classifier trained on other targets could not beat the Majority baseline on this test set.", "cite_spans": [], "ref_spans": [ { "start": 29, "end": 36, "text": "Table 4", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Task B Baselines", "sec_num": "6.1" }, { "text": "Nine teams competed in Task B. Table 4 shows each team's performance. Teams are sorted in terms of their performance according to the official metric. The best results obtained by a participating system was an F avg of 56.28 by pkudblab. They used a rule-based annotation of the domain corpus to train a deep convolutional neural network to differentiate 'favour' from 'against' instances. 
At test time, they combined their network's output with rules to produce predictions that include the 'neither' class.", "cite_spans": [], "ref_spans": [ { "start": 31, "end": 38, "text": "Table 4", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Task B Participating Stance Systems", "sec_num": "6.2" }, { "text": "In general, results for Task B are lower than those for Task A as one would expect, as we remove the benefit of direct supervision. However, they are perhaps not as low as we might have expected, with the best result of 56.28 actually beating the best result for the supervised 'Climate Change' task (54.86). Table 5 shows results for Task B on subsets of the test set where opinion is expressed towards the target of interest and towards some other entity. Observe that here too results are markedly lower when stance is to be inferred from a tweet expressing opinion about some other entity (and not the target of interest).", "cite_spans": [], "ref_spans": [ { "start": 309, "end": 316, "text": "Table 5", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Task B Participating Stance Systems", "sec_num": "6.2" }, { "text": "Some teams did very well detecting tweets in favor of Trump (ltl.uni-due), with most of the others performing best on tweets against Trump. This makes sense, as 'against' tweets made up the bulk of the Trump dataset. The top team, pkudblab, was the only one to successfully balance these two goals, achieving the best F f avor score and the second-best F against score.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6.3" }, { "text": "The Task B teams varied wildly in terms of approaches to this problem. The top three teams all took the approach of producing noisy labels, with pkudblab using keyword rules, LitisMind using hashtag rules on external data, and INF-UFRGS using a combination of rules and third-party sentiment classifiers. However, we were pleased to see other teams attempting to generalize the supervised data from Task A in interesting ways, either using rules or multi-stage classifiers to bridge the target gap. We are optimistic that there is much interesting follow-up work yet to come on this task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6.3" }, { "text": "Further details on the submissions can be found in the system description papers published in the SemEval-2016 proceedings, including papers by Elfardy and Diab (2016) for CU-GWU, Dias and Becker (2016) for INF-URGS, Patra et al. (2016) for JU NLP, Wojatzki and Zesch (2016) for ltl.uni-due, Zarrella and Marsh (2016) 2014, Rajadesingan and Liu (2014) , and Sobhani et al. (2015) . There is a vast amount of work in sentiment analysis of tweets, and we refer the reader to surveys (Pang and Lee, 2008; Liu and Zhang, 2012; and proceedings of recent shared task competitions (Wilson et al., 2013; Mohammad et al., 2013; Rosenthal et al., 2015) . See Pontiki et al. (2014) , Pontiki et al. (2015) , and Kiritchenko et al. (2014a) for tasks and systems on aspect based sentiment analysis, where the goal is to determine sentiment towards aspects of a product such as speed of processor and screen resolution of a cell phone.", "cite_spans": [ { "start": 180, "end": 202, "text": "Dias and Becker (2016)", "ref_id": "BIBREF4" }, { "start": 207, "end": 236, "text": "INF-URGS, Patra et al. 
(2016)", "ref_id": null }, { "start": 249, "end": 274, "text": "Wojatzki and Zesch (2016)", "ref_id": "BIBREF31" }, { "start": 292, "end": 317, "text": "Zarrella and Marsh (2016)", "ref_id": "BIBREF33" }, { "start": 324, "end": 351, "text": "Rajadesingan and Liu (2014)", "ref_id": "BIBREF22" }, { "start": 358, "end": 379, "text": "Sobhani et al. (2015)", "ref_id": "BIBREF24" }, { "start": 481, "end": 501, "text": "(Pang and Lee, 2008;", "ref_id": "BIBREF18" }, { "start": 502, "end": 522, "text": "Liu and Zhang, 2012;", "ref_id": "BIBREF11" }, { "start": 574, "end": 595, "text": "(Wilson et al., 2013;", "ref_id": "BIBREF30" }, { "start": 596, "end": 618, "text": "Mohammad et al., 2013;", "ref_id": "BIBREF14" }, { "start": 619, "end": 642, "text": "Rosenthal et al., 2015)", "ref_id": "BIBREF23" }, { "start": 649, "end": 670, "text": "Pontiki et al. (2014)", "ref_id": "BIBREF20" }, { "start": 673, "end": 694, "text": "Pontiki et al. (2015)", "ref_id": "BIBREF21" }, { "start": 701, "end": 727, "text": "Kiritchenko et al. (2014a)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6.3" }, { "text": "We described a new shared task on detecting stance towards pre-chosen targets of interest from tweets. We formulated two tasks: a traditional supervised task where labeled training data for the test data targets is made available (Task A) and a more challenging formulation where no labeled data pertaining to the test data targets is available (Task B). We received 19 submissions for Task A and 9 for Task B, with systems utilizing a wide array of features and resources. Stance detection, especially as formulated for Task B, is still in its infancy, and we hope that the dataset made available as part of this task will foster further research not only on stance detection as proposed here, but also for related tasks such as exploring the different ways in which stance is conveyed, and how the distribution of stance towards a target changes over time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "8" }, { "text": "Note that we use 'tweet' to refer to the text of the tweet and not to its meta-information. In our annotation task, we asked respondents to label for stance towards a given target based on the tweet text alone. However, automatic systems may benefit from exploiting tweet meta-information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://alt.qcri.org/semeval2016/task6/ http://www.saifmohammad.com/WebPages/StanceDataset.htm", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "A tweet that has a seemingly favorable hashtag may in fact oppose the target; and this is not uncommon. 
Similarly unfavorable hashtags may occur in tweets that favor the target.4 The full set of instructions is made available on the shared task website (http://alt.qcri.org/semeval2016/task6/).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.crowdflower.com 6 The 60% threshold is somewhat arbitrary, but it seemed appropriate in terms of balancing quality and quantity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Annotations of the Stance Dataset were funded by the National Research Council of Canada.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgment", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Cats rule and dogs drool!: Classifying stance in online debate", "authors": [ { "first": "Pranav", "middle": [], "last": "Anand", "suffix": "" }, { "first": "Marilyn", "middle": [], "last": "Walker", "suffix": "" }, { "first": "Rob", "middle": [], "last": "Abbott", "suffix": "" }, { "first": "Jean E Fox", "middle": [], "last": "Tree", "suffix": "" }, { "first": "Robeson", "middle": [], "last": "Bowmani", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Minor", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 2nd workshop on computational approaches to subjectivity and sentiment analysis", "volume": "", "issue": "", "pages": "1--9", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pranav Anand, Marilyn Walker, Rob Abbott, Jean E Fox Tree, Robeson Bowmani, and Michael Minor. 2011. Cats rule and dogs drool!: Classifying stance in on- line debate. In Proceedings of the 2nd workshop on computational approaches to subjectivity and senti- ment analysis, pages 1-9.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "USFD at SemEval-2016 Task 6: Any-Target Stance Detection on Twitter with Autoencoders", "authors": [ { "first": "Isabelle", "middle": [], "last": "Augenstein", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "Vlachos", "suffix": "" }, { "first": "Kalina", "middle": [], "last": "Bontcheva", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the International Workshop on Semantic Evaluation, SemEval '16", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Isabelle Augenstein, Andreas Vlachos, and Kalina Bontcheva. 2016. USFD at SemEval-2016 Task 6: Any-Target Stance Detection on Twitter with Autoen- coders. In Proceedings of the International Workshop on Semantic Evaluation, SemEval '16, San Diego, California, June.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Back up your stance: Recognizing arguments in online discussions", "authors": [ { "first": "Filip", "middle": [], "last": "Boltuzic", "suffix": "" }, { "first": "Jan\u0161najder", "middle": [], "last": "", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the First Workshop on Argumentation Mining", "volume": "", "issue": "", "pages": "49--58", "other_ids": {}, "num": null, "urls": [], "raw_text": "Filip Boltuzic and Jan\u0160najder. 2014. Back up your stance: Recognizing arguments in online discussions. 
In Proceedings of the First Workshop on Argumenta- tion Mining, pages 49-58.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Recognizing arguing subjectivity and argument tags", "authors": [ { "first": "Alexander", "middle": [], "last": "Conrad", "suffix": "" }, { "first": "Janyce", "middle": [], "last": "Wiebe", "suffix": "" }, { "first": "Rebecca", "middle": [], "last": "Hwa", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Workshop on Extra-Propositional Aspects of Meaning in Computational Linguistics", "volume": "", "issue": "", "pages": "80--88", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexander Conrad, Janyce Wiebe, and Rebecca Hwa. 2012. Recognizing arguing subjectivity and argu- ment tags. In Proceedings of the Workshop on Extra- Propositional Aspects of Meaning in Computational Linguistics, pages 80-88.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "INF-UFRGS-OPINION-MINING at SemEval-2016 Task 6: Automatic Generation of a Training Corpus for Unsupervised Identification of Stance in Tweets", "authors": [ { "first": "Marcelo", "middle": [], "last": "Dias", "suffix": "" }, { "first": "Karin", "middle": [], "last": "Becker", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the International Workshop on Semantic Evaluation, SemEval '16", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marcelo Dias and Karin Becker. 2016. INF-UFRGS- OPINION-MINING at SemEval-2016 Task 6: Auto- matic Generation of a Training Corpus for Unsuper- vised Identification of Stance in Tweets. In Proceed- ings of the International Workshop on Semantic Eval- uation, SemEval '16, San Diego, California, June.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "What does Twitter have to say about ideology?", "authors": [ { "first": "Sarah", "middle": [], "last": "Djemili", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Longhi", "suffix": "" }, { "first": "Claudia", "middle": [], "last": "Marinica", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Natural Language Processing for Computer-Mediated Communication/Social Media-Pre-conference workshop at Konvens", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sarah Djemili, Julien Longhi, Claudia Marinica, Dim- itris Kotzinos, and Georges-Elia Sarfati. 2014. What does Twitter have to say about ideology? In Proceedings of the Natural Language Process- ing for Computer-Mediated Communication/Social Media-Pre-conference workshop at Konvens.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "CU-GWU at SemEval-2016 Task", "authors": [ { "first": "Heba", "middle": [], "last": "Elfardy", "suffix": "" }, { "first": "Mona", "middle": [], "last": "Diab", "suffix": "" } ], "year": 2016, "venue": "", "volume": "6", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Heba Elfardy and Mona Diab. 2016. CU-GWU at SemEval-2016 Task 6: Perspective at SemEval-2016", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Task 6: Ideological Stance Detection in Informal Text", "authors": [], "year": null, "venue": "Proceedings of the International Workshop on Semantic Evaluation, SemEval '16", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Task 6: Ideological Stance Detection in Informal Text. 
In Proceedings of the International Workshop on Se- mantic Evaluation, SemEval '16, San Diego, Califor- nia, June.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Automated classification of stance in student essays: An approach using stance target information and the Wikipedia link-based measure", "authors": [ { "first": "Adam", "middle": [], "last": "Faulkner", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Twenty-Seventh International Flairs Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Faulkner. 2014. Automated classification of stance in student essays: An approach using stance target information and the Wikipedia link-based mea- sure. In Proceedings of the Twenty-Seventh Interna- tional Flairs Conference.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Mining and summarizing customer reviews", "authors": [ { "first": "Minqing", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining", "volume": "", "issue": "", "pages": "168--177", "other_ids": {}, "num": null, "urls": [], "raw_text": "Minqing Hu and Bing Liu. 2004. Mining and summa- rizing customer reviews. In Proceedings of the tenth ACM SIGKDD international conference on Knowl- edge discovery and data mining, pages 168-177.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "NRC-Canada-2014: Detecting aspects and sentiment in customer reviews", "authors": [ { "first": "Svetlana", "middle": [], "last": "Kiritchenko", "suffix": "" }, { "first": "Xiaodan", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Colin", "middle": [], "last": "Cherry", "suffix": "" }, { "first": "Saif", "middle": [ "M" ], "last": "Mohammad", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the International Workshop on Semantic Evaluation, SemEval '14", "volume": "50", "issue": "", "pages": "723--762", "other_ids": {}, "num": null, "urls": [], "raw_text": "Svetlana Kiritchenko, Xiaodan Zhu, Colin Cherry, and Saif M. Mohammad. 2014a. NRC-Canada-2014: De- tecting aspects and sentiment in customer reviews. In Proceedings of the International Workshop on Seman- tic Evaluation, SemEval '14, Dublin, Ireland, August. Svetlana Kiritchenko, Xiaodan Zhu, and Saif M. Mo- hammad. 2014b. Sentiment analysis of short infor- mal texts. Journal of Artificial Intelligence Research, 50:723-762.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A survey of opinion mining and sentiment analysis", "authors": [ { "first": "Bing", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2012, "venue": "Mining Text Data", "volume": "", "issue": "", "pages": "415--463", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bing Liu and Lei Zhang. 2012. A survey of opinion mining and sentiment analysis. In Charu C. Aggar- wal and ChengXiang Zhai, editors, Mining Text Data, pages 415-463. 
Springer US.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "nldsucsc at SemEval-2016 Task 6: A Semi-Supervised Approach to Detecting Stance in Tweets", "authors": [ { "first": "Amita", "middle": [], "last": "Misra", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Ecker", "suffix": "" }, { "first": "Theodore", "middle": [], "last": "Handleman", "suffix": "" }, { "first": "Nicolas", "middle": [], "last": "Hahn", "suffix": "" }, { "first": "Marilyn", "middle": [], "last": "Walker", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the International Workshop on Semantic Evaluation, Se-mEval '16", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amita Misra, Brian Ecker, Theodore Handleman, Nico- las Hahn, and Marilyn Walker. 2016. nldsucsc at SemEval-2016 Task 6: A Semi-Supervised Approach to Detecting Stance in Tweets. In Proceedings of the International Workshop on Semantic Evaluation, Se- mEval '16, San Diego, California, June.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Emotions evoked by common words and phrases: Using Mechanical Turk to create an emotion lexicon", "authors": [ { "first": "M", "middle": [], "last": "Saif", "suffix": "" }, { "first": "", "middle": [], "last": "Mohammad", "suffix": "" }, { "first": "D", "middle": [], "last": "Peter", "suffix": "" }, { "first": "", "middle": [], "last": "Turney", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text", "volume": "", "issue": "", "pages": "26--34", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saif M Mohammad and Peter D Turney. 2010. Emo- tions evoked by common words and phrases: Using Mechanical Turk to create an emotion lexicon. In Pro- ceedings of the NAACL HLT 2010 Workshop on Com- putational Approaches to Analysis and Generation of Emotion in Text, pages 26-34.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "NRC-Canada: Building the state-of-theart in sentiment analysis of tweets", "authors": [ { "first": "Saif", "middle": [], "last": "Mohammad", "suffix": "" }, { "first": "Svetlana", "middle": [], "last": "Kiritchenko", "suffix": "" }, { "first": "Xiaodan", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the International Workshop on Semantic Evaluation, SemEval '13", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saif Mohammad, Svetlana Kiritchenko, and Xiaodan Zhu. 2013. NRC-Canada: Building the state-of-the- art in sentiment analysis of tweets. In Proceedings of the International Workshop on Semantic Evaluation, SemEval '13, Atlanta, Georgia, USA, June.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Parinaz Sobhani, Xiaodan Zhu, and Colin Cherry. 2016a. A dataset for detecting stance in tweets", "authors": [ { "first": "M", "middle": [], "last": "Saif", "suffix": "" }, { "first": "Svetlana", "middle": [], "last": "Mohammad", "suffix": "" }, { "first": "", "middle": [], "last": "Kiritchenko", "suffix": "" } ], "year": null, "venue": "Proceedings of 10th edition of the the Language Resources and Evaluation Conference (LREC)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saif M. Mohammad, Svetlana Kiritchenko, Parinaz Sob- hani, Xiaodan Zhu, and Colin Cherry. 2016a. A dataset for detecting stance in tweets. 
In Proceed- ings of 10th edition of the the Language Resources and Evaluation Conference (LREC), Portoro\u017e, Slovenia.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Stance and sentiment in tweets", "authors": [ { "first": "M", "middle": [], "last": "Saif", "suffix": "" }, { "first": "Parinaz", "middle": [], "last": "Mohammad", "suffix": "" }, { "first": "Svetlana", "middle": [], "last": "Sobhani", "suffix": "" }, { "first": "", "middle": [], "last": "Kiritchenko", "suffix": "" } ], "year": 2016, "venue": "Special Section of the ACM Transactions on Internet Technology on Argumentation in Social Media", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saif M. Mohammad, Parinaz Sobhani, and Svetlana Kir- itchenko. 2016b. Stance and sentiment in tweets. Spe- cial Section of the ACM Transactions on Internet Tech- nology on Argumentation in Social Media, Submitted.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Sentiment analysis: Detecting valence, emotions, and other affectual states from text", "authors": [ { "first": "M", "middle": [], "last": "Saif", "suffix": "" }, { "first": "", "middle": [], "last": "Mohammad", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saif M Mohammad. 2015. Sentiment analysis: Detect- ing valence, emotions, and other affectual states from text.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Opinion mining and sentiment analysis. Foundations and Trends in Information Retrieval", "authors": [ { "first": "Bo", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Lillian", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2008, "venue": "", "volume": "2", "issue": "", "pages": "1--135", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bo Pang and Lillian Lee. 2008. Opinion mining and sentiment analysis. Foundations and Trends in Infor- mation Retrieval, 2(1-2):1-135.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "JU NLP at SemEval-2016 Task 6: Detecting Stance in Tweets using Support Vector Machines", "authors": [ { "first": "Dipankar", "middle": [], "last": "Braja Gopal Patra", "suffix": "" }, { "first": "Sivaji", "middle": [], "last": "Das", "suffix": "" }, { "first": "", "middle": [], "last": "Bandyopadhyay", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the International Workshop on Semantic Evaluation, SemEval '16", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Braja Gopal Patra, Dipankar Das, and Sivaji Bandyopad- hyay. 2016. JU NLP at SemEval-2016 Task 6: De- tecting Stance in Tweets using Support Vector Ma- chines. In Proceedings of the International Workshop on Semantic Evaluation, SemEval '16, San Diego, California, June.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Harris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar", "authors": [ { "first": "Maria", "middle": [], "last": "Pontiki", "suffix": "" }, { "first": "Dimitrios", "middle": [], "last": "Galanis", "suffix": "" }, { "first": "John", "middle": [], "last": "Pavlopoulos", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the International Workshop on Semantic Evaluation, SemEval '14", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maria Pontiki, Dimitrios Galanis, John Pavlopoulos, Har- ris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. 2014. 
SemEval-2014 Task 4: Aspect based sentiment analysis. In Proceedings of the Inter- national Workshop on Semantic Evaluation, SemEval '14, Dublin, Ireland, August.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "SemEval-2015 Task 12: Aspect based sentiment analysis", "authors": [ { "first": "Maria", "middle": [], "last": "Pontiki", "suffix": "" }, { "first": "Dimitrios", "middle": [], "last": "Galanis", "suffix": "" }, { "first": "Haris", "middle": [], "last": "Papageogiou", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the International Workshop on Semantic Evaluation, SemEval '15", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maria Pontiki, Dimitrios Galanis, Haris Papageogiou, Suresh Manandhar, and Ion Androutsopoulos. 2015. SemEval-2015 Task 12: Aspect based sentiment anal- ysis. In Proceedings of the International Workshop on Semantic Evaluation, SemEval '15, Denver, Colorado.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Identifying users with opposing opinions in Twitter debates", "authors": [ { "first": "Ashwin", "middle": [], "last": "Rajadesingan", "suffix": "" }, { "first": "Huan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Conference on Social Computing, Behavioral-Cultural Modeling and Prediction", "volume": "", "issue": "", "pages": "153--160", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashwin Rajadesingan and Huan Liu. 2014. Identifying users with opposing opinions in Twitter debates. In Proceedings of the Conference on Social Computing, Behavioral-Cultural Modeling and Prediction, pages 153-160, Washington, DC, USA.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Semeval-2015 Task 10: Sentiment analysis in Twitter", "authors": [ { "first": "Sara", "middle": [], "last": "Rosenthal", "suffix": "" }, { "first": "Preslav", "middle": [], "last": "Nakov", "suffix": "" }, { "first": "Svetlana", "middle": [], "last": "Kiritchenko", "suffix": "" }, { "first": "M", "middle": [], "last": "Saif", "suffix": "" }, { "first": "Alan", "middle": [], "last": "Mohammad", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Ritter", "suffix": "" }, { "first": "", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 9th International Workshop on Semantic Evaluations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sara Rosenthal, Preslav Nakov, Svetlana Kiritchenko, Saif M Mohammad, Alan Ritter, and Veselin Stoy- anov. 2015. Semeval-2015 Task 10: Sentiment analy- sis in Twitter. In Proceedings of the 9th International Workshop on Semantic Evaluations.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "From argumentation mining to stance classification", "authors": [ { "first": "Parinaz", "middle": [], "last": "Sobhani", "suffix": "" }, { "first": "Diana", "middle": [], "last": "Inkpen", "suffix": "" }, { "first": "Stan", "middle": [], "last": "Matwin", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the Workshop on Argumentation Mining", "volume": "", "issue": "", "pages": "67--77", "other_ids": {}, "num": null, "urls": [], "raw_text": "Parinaz Sobhani, Diana Inkpen, and Stan Matwin. 2015. From argumentation mining to stance classification. 
In Proceedings of the Workshop on Argumentation Min- ing, pages 67-77, Denver, Colorado, USA.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Recognizing stances in ideological on-line debates", "authors": [ { "first": "Swapna", "middle": [], "last": "Somasundaran", "suffix": "" }, { "first": "Janyce", "middle": [], "last": "Wiebe", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text", "volume": "", "issue": "", "pages": "116--124", "other_ids": {}, "num": null, "urls": [], "raw_text": "Swapna Somasundaran and Janyce Wiebe. 2010. Recog- nizing stances in ideological on-line debates. In Pro- ceedings of the NAACL HLT 2010 Workshop on Com- putational Approaches to Analysis and Generation of Emotion in Text, pages 116-124.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Collective stance classification of posts in online debate forums", "authors": [ { "first": "Dhanya", "middle": [], "last": "Sridhar", "suffix": "" }, { "first": "Lise", "middle": [], "last": "Getoor", "suffix": "" }, { "first": "Marilyn", "middle": [], "last": "Walker", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dhanya Sridhar, Lise Getoor, and Marilyn Walker. 2014. Collective stance classification of posts in online de- bate forums. Proceedings of the Association for Com- putational Linguistics, page 109.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "TakeLab at SemEval-2016 Task 6: Stance Classification in Tweets Using a Genetic Algorithm Based Ensemble", "authors": [ { "first": "Martin", "middle": [], "last": "Tutek", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Sekuli\u0107", "suffix": "" }, { "first": "Paula", "middle": [], "last": "Gombar", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Paljak", "suffix": "" }, { "first": "Filip", "middle": [], "last": "Filip\u010dulinovi\u0107", "suffix": "" }, { "first": "Mladen", "middle": [], "last": "Boltu\u017ei\u0107", "suffix": "" }, { "first": "", "middle": [], "last": "Karan ; Wan", "suffix": "" }, { "first": "Xiao", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Xuqin", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Tengjiao", "middle": [], "last": "Chen", "suffix": "" }, { "first": "", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the International Workshop on Semantic Evaluation, SemEval '16", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martin Tutek, Ivan Sekuli\u0107, Paula Gombar, Ivan Paljak, Filip\u010culinovi\u0107, Filip Boltu\u017ei\u0107, Mladen Karan, Do- magoj Alagi\u0107, and Jan\u0160najder. 2016. TakeLab at SemEval-2016 Task 6: Stance Classification in Tweets Using a Genetic Algorithm Based Ensemble. In Pro- ceedings of the International Workshop on Semantic Evaluation, SemEval '16, San Diego, California, June. Wan Wei, Xiao Zhang, Xuqin Liu, Wei Chen, and Tengjiao Wang. 2016. 
pkudblab at SemEval-2016", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "A Specific Convolutional Neural Network System for Effective Stance Detection", "authors": [], "year": null, "venue": "Proceedings of the International Workshop on Semantic Evaluation, SemEval '16", "volume": "6", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Task 6: A Specific Convolutional Neural Network Sys- tem for Effective Stance Detection. In Proceedings of the International Workshop on Semantic Evaluation, SemEval '16, San Diego, California, June.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Recognizing contextual polarity in phrase-level sentiment analysis", "authors": [ { "first": "Theresa", "middle": [], "last": "Wilson", "suffix": "" }, { "first": "Janyce", "middle": [], "last": "Wiebe", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Hoffmann", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "347--354", "other_ids": {}, "num": null, "urls": [], "raw_text": "Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2005. Recognizing contextual polarity in phrase-level sentiment analysis. In Proceedings of the Conference on Human Language Technology and Empirical Meth- ods in Natural Language Processing, pages 347-354, Vancouver, British Columbia, Canada.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "SemEval-2013 Task 2: Sentiment analysis in Twitter", "authors": [ { "first": "Theresa", "middle": [], "last": "Wilson", "suffix": "" }, { "first": "Zornitsa", "middle": [], "last": "Kozareva", "suffix": "" }, { "first": "Preslav", "middle": [], "last": "Nakov", "suffix": "" }, { "first": "Sara", "middle": [], "last": "Rosenthal", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" }, { "first": "Alan", "middle": [], "last": "Ritter", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the International Workshop on Semantic Evaluation, SemEval '13", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Theresa Wilson, Zornitsa Kozareva, Preslav Nakov, Sara Rosenthal, Veselin Stoyanov, and Alan Ritter. 2013. SemEval-2013 Task 2: Sentiment analysis in Twit- ter. In Proceedings of the International Workshop on Semantic Evaluation, SemEval '13, Atlanta, Georgia, USA, June.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "ltl.uni-due at SemEval-2016 Task 6: Stance Detection in Social Media Using Stacked Classifiers", "authors": [ { "first": "Michael", "middle": [], "last": "Wojatzki", "suffix": "" }, { "first": "Torsten", "middle": [], "last": "Zesch", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the International Workshop on Semantic Evaluation, SemEval '16", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Wojatzki and Torsten Zesch. 2016. ltl.uni-due at SemEval-2016 Task 6: Stance Detection in Social Media Using Stacked Classifiers. 
In Proceedings of the International Workshop on Semantic Evaluation, SemEval '16, San Diego, California, June.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Tohoku at SemEval-2016 Task 6: Feature-based Model versus Convolutional Neural Network for Stance Detection", "authors": [ { "first": "Igarashi", "middle": [], "last": "Yuki", "suffix": "" }, { "first": "Komatsu", "middle": [], "last": "Hiroya", "suffix": "" }, { "first": "Kobayashi", "middle": [], "last": "Sosuke", "suffix": "" }, { "first": "Okazaki", "middle": [], "last": "Naoaki", "suffix": "" }, { "first": "Inui", "middle": [], "last": "Kentaro", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the International Workshop on Semantic Evaluation, SemEval '16", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Igarashi Yuki, Komatsu Hiroya, Kobayashi Sosuke, Okazaki Naoaki, and Inui Kentaro. 2016. Tohoku at SemEval-2016 Task 6: Feature-based Model versus Convolutional Neural Network for Stance Detection. In Proceedings of the International Workshop on Se- mantic Evaluation, SemEval '16, San Diego, Califor- nia, June.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "MITRE at SemEval-2016 Task 6: Transfer Learning for Stance Detection", "authors": [ { "first": "Guido", "middle": [], "last": "Zarrella", "suffix": "" }, { "first": "Amy", "middle": [], "last": "Marsh", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the International Workshop on Semantic Evaluation, SemEval '16", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guido Zarrella and Amy Marsh. 2016. MITRE at SemEval-2016 Task 6: Transfer Learning for Stance Detection. In Proceedings of the International Work- shop on Semantic Evaluation, SemEval '16, San Diego, California, June.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "num": null, "uris": null, "text": "for MITRE, Misra et al. (2016) for nldsucsc, Wei et al. (2016) for pkudblab, Tutek et al. (2016) for TakeLab, Yuki et al. (2016) for Tohoku, and Augenstein et al. (2016) for USFD. 7 Related Work Past work on stance detection includes that by Somasundaran and Wiebe (2010), Anand et al. (2011), Faulkner (2014), Rajadesingan and Liu (2014), Djemili et al. (2014), Boltuzic and\u0160najder (2014), Conrad et al. (2012), Sridhar et al." }, "TABREF0": { "num": null, "html": null, "text": "Distribution of instances in the Stance Train and Test sets for Task A and Task B. with 2,914 labeled training data instances for the five targets. The test data included 1,249 instances.", "type_str": "table", "content": "
Target | # total | # train | % of instances in Train (favor / against / neither) | # test | % of instances in Test (favor / against / neither)
Data for Task A
Atheism | 733 | 513 | 17.9 / 59.3 / 22.8 | 220 | 14.5 / 72.7 / 12.7
Climate Change is Concern | 564 | 395 | 53.7 / 3.8 / 42.5 | 169 | 72.8 / 6.5 / 20.7
Feminist Movement | 949 | 664 | 31.6 / 49.4 / 19.0 | 285 | 20.4 / 64.2 / 15.4
Hillary Clinton | 984 | 689 | 17.1 / 57.0 / 25.8 | 295 | 15.3 / 58.3 / 26.4
Legalization of Abortion | 933 | 653 | 18.5 / 54.4 / 27.1 | 280 | 16.4 / 67.5 / 16.1
All | 4163 | 2914 | 25.8 / 47.9 / 26.3 | 1249 | 24.3 / 57.3 / 18.4
Data for Task B
Donald Trump | 707 | 0 | - / - / - | 707 | 20.93 / 42.29 / 36.78
" }, "TABREF2": { "num": null, "html": null, "text": "Results for Task A, reporting the official competition metric as 'Overall Favg', along with Ffavor and Fagainst over all targets and Favg for each individual target. The highest scores in each column among the baselines and among the participating systems are shown in bold.", "type_str": "table", "content": "" }, "TABREF4": { "num": null, "html": null, "text": "Results for Task A (the official competition metric Favg) on different subsets of the test data. The highest scores in each column among the baselines and among the participating systems are shown in bold.", "type_str": "table", "content": "
" }, "TABREF6": { "num": null, "html": null, "text": "Results for Task B, reporting the official competition metric as Favg, along with Ffavor and Fagainst. The highest score in each column is shown in bold.", "type_str": "table", "content": "
" }, "TABREF8": { "num": null, "html": null, "text": "Results for Task B (the official competition metric Favg) on different subsets of the test data. The highest score in each column is shown in bold.", "type_str": "table", "content": "
" } } } }