{ "paper_id": "P18-1023", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:42:35.469765Z" }, "title": "Retrieval of the Best Counterargument without Prior Topic Knowledge", "authors": [ { "first": "Henning", "middle": [], "last": "Wachsmuth", "suffix": "", "affiliation": {}, "email": "henningw@upb.de" }, { "first": "Shahbaz", "middle": [], "last": "Syed", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Benno", "middle": [], "last": "Stein", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Given any argument on any controversial topic, how to counter it? This question implies the challenging retrieval task of finding the best counterargument. Since prior knowledge of a topic cannot be expected in general, we hypothesize the best counterargument to invoke the same aspects as the argument while having the opposite stance. To operationalize our hypothesis, we simultaneously model the similarity and dissimilarity of pairs of arguments, based on the words and embeddings of the arguments' premises and conclusions. A salient property of our model is its independence from the topic at hand, i.e., it applies to arbitrary arguments. We evaluate different model variations on millions of argument pairs derived from the web portal idebate.org. Systematic ranking experiments suggest that our hypothesis is true for many arguments: For 7.6 candidates with opposing stance on average, we rank the best counterargument highest with 60% accuracy. Even among all 2801 test set pairs as candidates, we still find the best one about every third time.", "pdf_parse": { "paper_id": "P18-1023", "_pdf_hash": "", "abstract": [ { "text": "Given any argument on any controversial topic, how to counter it? This question implies the challenging retrieval task of finding the best counterargument. 
Since prior knowledge of a topic cannot be expected in general, we hypothesize the best counterargument to invoke the same aspects as the argument while having the opposite stance. To operationalize our hypothesis, we simultaneously model the similarity and dissimilarity of pairs of arguments, based on the words and embeddings of the arguments' premises and conclusions. A salient property of our model is its independence from the topic at hand, i.e., it applies to arbitrary arguments. We evaluate different model variations on millions of argument pairs derived from the web portal idebate.org. Systematic ranking experiments suggest that our hypothesis is true for many arguments: For 7.6 candidates with opposing stance on average, we rank the best counterargument highest with 60% accuracy. Even among all 2801 test set pairs as candidates, we still find the best one about every third time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Many controversial topics in real life divide us into opposing camps, such as whether to ban guns, who should become president, or what phone to buy. When being confronted with arguments against our stance, but also when forming own arguments, we need to think about how they could best be countered. Argumentation theory tells us that -aside from ad-hominem attacks -a counterargument denies either an argument's premises, its conclusion, or the reasoning between them (Walton, 2009) . Take the following argument in favor of the right to bear arms from the web portal idebate.org:", "cite_spans": [ { "start": 470, "end": 484, "text": "(Walton, 2009)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Argument \"Gun ownership is an integral aspect of the right to self defence. 
(conclusion) Law-abiding citizens deserve the right to protect their families in their own homes, especially if the police are judged incapable of dealing with the threat of attack. [...] \" (premise) While the conclusion seems well-reasoned, the web portal directly provides a counter to the argument:", "cite_spans": [ { "start": 258, "end": 263, "text": "[...]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Counterargument \"Burglary should not be punished by vigilante killings of the offender. No amount of property is worth a human life. Perversely, the danger of attack by homeowners may make it more likely that criminals will carry their own weapons. If a right to self-defence is granted in this way, many accidental deaths are bound to result. [...] ", "cite_spans": [ { "start": 344, "end": 349, "text": "[...]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "As in this example, we observe that a counterargument often takes on the aspects of the topic invoked by the argument, while adding a new perspective to its conclusion and/or premises, conveying the opposite stance. Research has tackled the stance of argument units (Bar-Haim et al., 2017) as well as the attack relations between arguments (Cabrio and Villata, 2012) . However, existing approaches learn the interplay of aspects and topics on training data or infer it from external knowledge bases (details in Section 2). This does not work for topics unseen before. Moreover, to our knowledge, no work so far aims at actual counterarguments. This paper studies the task of automatically finding the best counterargument to any argument. In the general case, we cannot expect prior knowledge of an argument's topic. Following the observation above, we thus just hypothesize the best counterargument to invoke the same aspects as the argument while having the opposite stance. 
Figure 1 sketches how we operationalize the hypothesis. In particular, we simultaneously model the topic similarity and stance dissimilarity of a candidate counterargument to the argument. Both are inferred -in different ways -from the similarities to the argument's conclusion and premises, since it is unclear in advance whether either of these units or the reasoning between them is countered. Thereby, we find the most dissimilar among the most similar arguments.", "cite_spans": [ { "start": 266, "end": 289, "text": "(Bar-Haim et al., 2017)", "ref_id": "BIBREF3" }, { "start": 340, "end": 366, "text": "(Cabrio and Villata, 2012)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 977, "end": 985, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "\"", "sec_num": null }, { "text": "To study counterarguments, we provide a new corpus with 6753 argument-counterargument pairs, taken from 1069 debates on idebate.org, as well as millions of false pairs derived from them. Given the corpus, we define eight retrieval tasks that differ in the types of candidate counterarguments. Based on the words and embeddings of the arguments, we develop similarity functions that realize the outlined model as a ranking approach. In systematic experiments, we evaluate the different building blocks of our model on all defined tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\"", "sec_num": null }, { "text": "The results suggest that our hypothesis is true for many arguments. The best model configuration improves common word and embedding similarity measures by eight to ten points accuracy in all tasks. Inter alia, we rank 60.3% of the best counterarguments highest when given all arguments with opposite stance (7.6 on average). Even with all 2801 test arguments as candidates, we still achieve 32.4% (and a mean rank of 15), fitting the intuition that off-topic arguments are easier to discard. 
Our analysis reveals notable gaps across topical themes, though.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\"", "sec_num": null }, { "text": "Contributions We believe that our findings will be important for applications such as automatic debating technologies (Rinott et al., 2015) and argument search (Wachsmuth et al., 2017b) . To summarize, our main contributions are:", "cite_spans": [ { "start": 118, "end": 139, "text": "(Rinott et al., 2015)", "ref_id": "BIBREF26" }, { "start": 160, "end": 185, "text": "(Wachsmuth et al., 2017b)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "\"", "sec_num": null }, { "text": "\u2022 A large corpus for studying multiple counterargument retrieval tasks (Sections 3 and 4).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\"", "sec_num": null }, { "text": "\u2022 A topic-independent approach to find the best counterargument to any argument (Section 5).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\"", "sec_num": null }, { "text": "\u2022 Evidence that many counterarguments can be found without topic knowledge (Section 6).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\"", "sec_num": null }, { "text": "The corpus as well as the Java source code for reproducing the experiments are available at http://www.arguana.com.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\"", "sec_num": null }, { "text": "Counterarguments rebut arguments. In the theoretical model of Toulmin (1958) , a rebuttal in fact does not attack the argument, but it merely shows exceptions to the argument's reasoning. Govier (2010) suggests speaking of counterconsiderations in such cases. Unlike Damer (2009) , who investigates how to attack several kinds of fallacies, we are interested in how to identify attacks. 
We focus on those that target arguments, excluding personal (ad-hominem) attacks (Habernal et al., 2018) .", "cite_spans": [ { "start": 62, "end": 76, "text": "Toulmin (1958)", "ref_id": "BIBREF32" }, { "start": 188, "end": 201, "text": "Govier (2010)", "ref_id": "BIBREF13" }, { "start": 274, "end": 286, "text": "Damer (2009)", "ref_id": "BIBREF11" }, { "start": 475, "end": 498, "text": "(Habernal et al., 2018)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Following Walton (2006) , an argument can be attacked in two ways: one is to question its validity -not meaning that its conclusion must be wrong. The other is to rebut it with a counterargument that entails the opposite conclusion, often by revisiting aspects or introducing new ones. This is the type of attack we study. As Walton (2009) details, rebuttals may target an argument's premises or conclusion, or they may undercut the reasoning between them.", "cite_spans": [ { "start": 10, "end": 23, "text": "Walton (2006)", "ref_id": "BIBREF36" }, { "start": 326, "end": 339, "text": "Walton (2009)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Recently, the computational analysis of natural language argumentation has been receiving much attention. Most research focuses on argument mining, ranging from segmenting a text into argument units (Ajjour et al., 2017) , through identifying unit types (Rinott et al., 2015) and roles (Niculae et al., 2017) , to classifying argument schemes (Feng and Hirst, 2011) and relations (Lawrence and Reed, 2017) . Some works detect counterconsiderations in a text (Peldszus and Stede, 2015) or their absence (Stab and Gurevych, 2016) . Such considerations make arguments more balanced (see above). 
In contrast, we seek arguments that defeat others.", "cite_spans": [ { "start": 193, "end": 214, "text": "(Ajjour et al., 2017)", "ref_id": "BIBREF0" }, { "start": 245, "end": 266, "text": "(Rinott et al., 2015)", "ref_id": "BIBREF26" }, { "start": 277, "end": 299, "text": "(Niculae et al., 2017)", "ref_id": "BIBREF23" }, { "start": 371, "end": 396, "text": "(Lawrence and Reed, 2017)", "ref_id": "BIBREF19" }, { "start": 449, "end": 475, "text": "(Peldszus and Stede, 2015)", "ref_id": "BIBREF24" }, { "start": 493, "end": 518, "text": "(Stab and Gurevych, 2016)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Many approaches mine attack relations between arguments. Some use deep learning to find attacks in discussions (Cocarascu and Toni, 2017) . Closer to this paper, others determine them in a given set of arguments, using textual entailment (Cabrio and Villata, 2012) or a combination of Markov logic and stance classification (Hou and Jochim, 2017) . In principle, any attacking argument denotes a counterargument. Unlike previous work, however, we aim for the best counterargument to an argument.", "cite_spans": [ { "start": 111, "end": 137, "text": "(Cocarascu and Toni, 2017)", "ref_id": "BIBREF10" }, { "start": 238, "end": 264, "text": "(Cabrio and Villata, 2012)", "ref_id": "BIBREF8" }, { "start": 324, "end": 346, "text": "(Hou and Jochim, 2017)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Classifying the stance of a text towards a topic (pro or con) generally defines an alternative way of addressing counterarguments. Sobhani et al. (2015) specifically classify health-related arguments using supervised learning, while we do not expect to have prior topic knowledge. Bar-Haim et al. (2017) approach the stance of claims towards open-domain topics. 
Their approach combines aspect-based sentiment with external relations between aspects and topics from Wikipedia. As such, it is in fact limited to the topics covered there. Our model applies to arbitrary arguments and counterarguments.", "cite_spans": [ { "start": 131, "end": 152, "text": "Sobhani et al. (2015)", "ref_id": "BIBREF28" }, { "start": 281, "end": 303, "text": "Bar-Haim et al. (2017)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "We need to identify only whether arguments oppose each other, not their actual stance. Similarly, Menini et al. (2017) classify only the disagreement of political texts. Part of their approach is to detect topical key aspects in an unsupervised manner, which seems useful for our purposes. Analogously, Beigman Klebanov et al. (2010) study differences in vocabulary choice for the related task of perspective classification, and Tan et al. (2016) find that the best way to persuade opinion holders in the Change my view forum on reddit.com is to use dissimilar words. As we report later, however, our experiments did not show such results for the argument-counterargument pairs we deal with.", "cite_spans": [ { "start": 98, "end": 118, "text": "Menini et al. (2017)", "ref_id": "BIBREF20" }, { "start": 429, "end": 446, "text": "Tan et al. (2016)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The goal of persuasion reveals the association of counterarguments to argumentation quality. Many quality criteria have been assessed for arguments, surveyed in (Wachsmuth et al., 2017a) . In the study of Habernal and Gurevych (2016) , one reason annotators gave for why an argument was more convincing than another was that it tackled flaws in the opposing view. Zhang et al. (2016) even found that debate winners tend to counter opposing arguments rather than focusing on their own arguments. 
Argument quality assessment is particularly important in retrieval scenarios. Existing approaches aim to retrieve documents that contain many claims (Roitman et al., 2016) or that provide most support for their claims (Braunstain et al., 2016) . In Wachsmuth et al. (2017c), we adapt PageRank to argumentative relations, in order to assess argument relevance objectively. While our search engine args for arguments on the web still uses content-based relevance measures in its first version (Wachsmuth et al., 2017b) , its long-term idea is to rank the best arguments highest. 1 The model presented in this work finds the best counterarguments, but it is meant to be integrated into args at some point.", "cite_spans": [ { "start": 161, "end": 186, "text": "(Wachsmuth et al., 2017a)", "ref_id": "BIBREF33" }, { "start": 205, "end": 233, "text": "Habernal and Gurevych (2016)", "ref_id": "BIBREF14" }, { "start": 364, "end": 383, "text": "Zhang et al. (2016)", "ref_id": "BIBREF38" }, { "start": 644, "end": 666, "text": "(Roitman et al., 2016)", "ref_id": "BIBREF27" }, { "start": 713, "end": 738, "text": "(Braunstain et al., 2016)", "ref_id": "BIBREF7" }, { "start": 986, "end": 1011, "text": "(Wachsmuth et al., 2017b)", "ref_id": "BIBREF34" }, { "start": 1072, "end": 1073, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Like here, args uses idebate.org arguments. Others take data from that portal for studying support (Boltu\u017ei\u0107 and \u0160najder, 2014) or for the distant supervision of argument mining (Al-Khatib et al., 2016). (Footnote 1: Argument search engine args: http://args.me) 
Our corpus is not only larger, though, but it is the first to utilize a unique feature of idebate.org: the explicit specification of counterarguments.", "cite_spans": [ { "start": 99, "end": 127, "text": "(Boltu\u017ei\u0107 and \u0160najder, 2014)", "ref_id": "BIBREF6" }, { "start": 197, "end": 198, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "This section introduces our ArguAna Counterargs corpus with argument-counterargument pairs, created automatically from the structure of idebate.org. The corpus is freely available at http://www.arguana.com/data. We also provide the code to replicate the construction process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The ArguAna Counterargs Corpus", "sec_num": "3" }, { "text": "On the portal idebate.org, diverse controversial topics of usually rather general interest are discussed in debates, subsumed under 15 themes, such as \"economy\" and \"health\". Each debate has a title capturing a thesis on a topic, such as \"This House would limit the right to bear arms\", followed by an introductory text, a set of mostly elaborated and well-written points that have a pro or a con stance towards the thesis, and a bibliography.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Web Portal idebate.org", "sec_num": "3.1" }, { "text": "A specific feature of idebate.org is that virtually every point comes along with a counter that immediately attacks the point and its stance. Both points and counters can be seen as arguments. 
While a point consists of a one-sentence claim (the argument's conclusion) and a few sentences justifying the claim (the premise(s)), the counter's (opposite) conclusion remains implicit.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Web Portal idebate.org", "sec_num": "3.1" }, { "text": "All arguments on the portal are established by a community with the goal of showing both sides of a topic in a balanced manner. We therefore assume each counter to be the best counterargument available for the respective point, and we use all resulting true argument pairs as the basis of our corpus. Figure 2 illustrates the italicized concepts, showing the structure of idebate.org. An example argument pair has been discussed in Section 1.", "cite_spans": [], "ref_spans": [ { "start": 301, "end": 309, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "The Web Portal idebate.org", "sec_num": "3.1" }, { "text": "We crawled all debates from idebate.org that follow the portal's theme-guided folder structure (last access: January 30, 2018). From each debate, we extracted the thesis, the introductory text, all points and counters, the bibliography, and some metadata. Each was stored separately in one plain text file, and we also created a file with the entire debate in its original order. Only points and counters are used in our experiments in Section 6. The underlying experiment settings are described in Section 4. Table 1 lists the number of debates crawled for each theme, along with the numbers of points and counters in the debates. The 26 points found without a counter are included in the corpus, but we do not use them in our experiments. In total, the ArguAna Counterargs corpus consists of 1069 debates with 6753 points that have a counter. The mean length of points is 196.3 words, whereas counters span only 129.6 words, largely due to the missing explicit conclusion. 
To avoid exploiting this corpus bias, no approach in our experiments captures length differences.", "cite_spans": [], "ref_spans": [ { "start": 510, "end": 517, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Corpus Construction", "sec_num": "3.2" }, { "text": "We split the corpus into a training set, consisting of the first 60% of all debates of each theme (ordered alphabetically), as well as a validation set and a test set, each covering 20%. The dataset sizes are found at the bottom of Table 1 . By putting all arguments from a debate into a single dataset, no specific topic knowledge can be gained from the training or validation set. We include all themes in all datasets, because we expect the set of themes to be stable.", "cite_spans": [], "ref_spans": [ { "start": 229, "end": 236, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Datasets", "sec_num": "3.4" }, { "text": "We checked for duplicates. Among the 13 532 points and counters, 3407 appear twice, 723 three times, 36 four times, and 1 five times. We ensure that no true pair is used as a false pair in our tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "3.4" }, { "text": "Based on the new corpus, we define the following eight counterargument retrieval tasks of different complexity. All tasks consider all true argument-counterargument pairs, while differing in terms of what arguments (points and/or counters) from which context (same debate, same theme, or entire portal) are candidates for a given argument.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Counterargument Retrieval Tasks", "sec_num": "4" }, { "text": "Same Debate: Opposing Counters All counters in the same debate with stance opposite to the given argument are candidates ( Figure 2 : a, b). 
The task is to find the best counterargument among all counters to the argument's stance.", "cite_spans": [], "ref_spans": [ { "start": 123, "end": 131, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Counterargument Retrieval Tasks", "sec_num": "4" }, { "text": "Same Debate: Counters All counters in the same debate irrespective of their stance are candidates (Figure 2 : a-c). The task is to find the best counterargument among all on-topic arguments phrased as counters. Same Debate: Opposing Arguments All arguments in the same debate with stance opposite to the given argument are candidates (Figure 2: a, b, d ). The task is to find the best among all on-topic counterarguments.", "cite_spans": [], "ref_spans": [ { "start": 98, "end": 107, "text": "(Figure 2", "ref_id": null }, { "start": 211, "end": 229, "text": "(Figure 2: a, b, d", "ref_id": null } ], "eq_spans": [], "section": "Counterargument Retrieval Tasks", "sec_num": "4" }, { "text": "Same Debate: Arguments All arguments in the same debate irrespective of their stance are candidates (Figure 2 : a-e). The task is to find the best counterargument among all on-topic arguments.", "cite_spans": [], "ref_spans": [ { "start": 100, "end": 109, "text": "(Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Counterargument Retrieval Tasks", "sec_num": "4" }, { "text": "Same Theme: Counters All counters from the same theme are candidates (Figure 2 : a-c, f). The task is to find the best counterargument among all on-theme arguments phrased as counters.", "cite_spans": [], "ref_spans": [ { "start": 69, "end": 78, "text": "(Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Counterargument Retrieval Tasks", "sec_num": "4" }, { "text": "Same Theme: Arguments All arguments from the same theme are candidates (Figure 2: a-g ). 
The task is to find the best counterargument among all on-theme arguments.", "cite_spans": [], "ref_spans": [ { "start": 71, "end": 85, "text": "(Figure 2: a-g", "ref_id": null } ], "eq_spans": [], "section": "Counterargument Retrieval Tasks", "sec_num": "4" }, { "text": "Entire Portal: Counters All counters are candidates (Figure 2 : a-c, f, h). The task is to find the best counterargument among all arguments phrased as counters.", "cite_spans": [], "ref_spans": [ { "start": 52, "end": 61, "text": "(Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Counterargument Retrieval Tasks", "sec_num": "4" }, { "text": "Entire Portal: Arguments All arguments are candidates (Figure 2 : a-i). The task is to find the best counterargument among all arguments. Table 2 lists the numbers of true and false pairs for each task. Experiment files containing the file paths of all candidate pairs are provided in our corpus.", "cite_spans": [], "ref_spans": [ { "start": 54, "end": 63, "text": "(Figure 2", "ref_id": null }, { "start": 138, "end": 145, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Counterargument Retrieval Tasks", "sec_num": "4" }, { "text": "The eight defined tasks indicate the subproblems of retrieving the best counterargument to a given argument: Finding all arguments that address the same topic, filtering those arguments with an opposite stance towards the topic, and identifying the best counter among these arguments. This section presents our approach to solving these problems computationally without prior knowledge of the argument's topic, based on the simultaneous similarity and dissimilarity of arguments. 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Retrieval of the Best Counterargument without Prior Topic Knowledge", "sec_num": "5" }, { "text": "We do not reinvent the wheel to assess topical relevance, but rather follow common practice. 
Concretely, we hypothesize a candidate counterargument to be on-topic if it is similar to the argument in terms of its words and its embedding. We capture these two types of similarity as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Topic as Word and Embedding Similarity", "sec_num": "5.1" }, { "text": "Word Argument Similarity To best represent the words in arguments, we did initial counterargument retrieval tests with token, stem, and lemma n-grams, n \u2208 {1, 2, 3}. While the differences were not large, stems worked best and stem 1-grams sufficed. Both might be a consequence of the limited data size. In our experiments in Section 6, we determine the stem 1-grams to be considered on the training set of each task. For word similarity computation, we tested four inverse vector-based distance measures: Cosine, Euclidean, Manhattan, and Jaccard similarity (Cha, 2007) . On the validation sets, the Manhattan similarity performed best, closely followed by the Jaccard similarity. Both clearly outperformed Euclidean and especially Cosine similarity. This suggests that the presence and absence of words are equally important and that outliers should not be punished more heavily. For brevity, we report only results for the Manhattan similarity below.", "cite_spans": [ { "start": 559, "end": 570, "text": "(Cha, 2007)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Topic as Word and Embedding Similarity", "sec_num": "5.1" }, { "text": "Embedding Argument Similarity We evaluated five pretrained word embedding models for representing arguments in initial tests: GoogleNews-vectors (Mikolov et al., 2013) , ConceptNet Numberbatch (Speer et al., 2017) , wiki-news-300d-1M, wiki-news-300d-1M-subword, and crawl-300d-2M . The former two were competitive, the others performed notably worse. 
Since ConceptNet Numberbatch is smaller and supposed to have less bias, we used it in all experiments.", "cite_spans": [ { "start": 142, "end": 164, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF22" }, { "start": 190, "end": 210, "text": "(Speer et al., 2017)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Topic as Word and Embedding Similarity", "sec_num": "5.1" }, { "text": "To capture argument-level embedding similarity, we compared the four inverse vector-based distance measures above on average word embeddings against the inverse Word Mover's distance, which quantifies the optimum alignment of two word embedding sequences (Kusner et al., 2015) . This Word Mover's similarity consistently beat the others, so we decided to restrict our view to it.", "cite_spans": [ { "start": 255, "end": 276, "text": "(Kusner et al., 2015)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Topic as Word and Embedding Similarity", "sec_num": "5.1" }, { "text": "Stance classification without prior topic knowledge is challenging: While we can compare the topics of any two arguments, it is impossible in general to relate the stance of the specific aspects invoked by one argument to those of the other. As sketched in Section 2, related work employs external knowledge to infer stance relations of aspects and topics (Bar-Haim et al., 2017) or trains classifiers for attack relations (Cabrio and Villata, 2012) . Unfortunately, neither applies to topics unseen before.", "cite_spans": [ { "start": 418, "end": 444, "text": "(Cabrio and Villata, 2012)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Stance as Topic Dissimilarity", "sec_num": "5.2" }, { "text": "For argument pairs invoking similar aspects, one option, in principle, is to assess sentiment polarity; intuitively, two arguments with the same topic but opposite sentiment have opposing stance. 
However, we tested topic-agnostic sentiment lexicons (Baccianella et al., 2010) and state-of-the-art sentiment classifiers, trained on large-scale multiple-domain review data (Prettenhofer and Stein, 2010; . The correlation between sentiment and stance differences of training arguments was close to zero. A possible explanation is the limited explicitness of sentiment on idebate.org, making the lexicons and classifiers fail there.", "cite_spans": [ { "start": 248, "end": 274, "text": "(Baccianella et al., 2010)", "ref_id": "BIBREF2" }, { "start": 369, "end": 399, "text": "(Prettenhofer and Stein, 2010;", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Stance as Topic Dissimilarity", "sec_num": "5.2" }, { "text": "Other related work suggests that the vocabulary of opposing sides differs (Beigman Klebanov et al., 2010) . We thus checked on the training set whether counterarguments are similar in their embeddings but dissimilar in their words. The measures above did not support this hypothesis, i.e., both embedding and word similarity increased the likelihood of a candidate counterargument being the best. Still, there must be a conceptual difference between an argument and its counterargument. As a solution, we capture dissimilarity with the same similarity functions as above, but we change the granularity level on which we measure similarity.", "cite_spans": [ { "start": 74, "end": 105, "text": "(Beigman Klebanov et al., 2010)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Stance as Topic Dissimilarity", "sec_num": "5.2" }, { "text": "The question that arises is how to assess similarity and dissimilarity at the same time. We hypothesize the best counterargument to be very similar in overall terms, but very dissimilar in certain respects. 
To capture this intuition, we rely on expert knowledge from argumentation theory (see Section 2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Simultaneous Similarity and Dissimilarity", "sec_num": "5.3" }, { "text": "In particular, we follow the notion that a counterargument attacks either the conclusion of an argument, the argument's premises, or both. As a consequence, we compute two word and two embedding similarities as specified above for each candidate counterargument; once to the argument's conclusion (called w c and e c for words and embeddings respectively) and once to the argument's premises (w p and e p ). Now, to capture similarity and dissimilarity simultaneously, we need multiple ways to aggregate conclusion and premise similarities. As we do not generally know which argument unit is attacked, we resort to four standard aggregation functions that generalize over the unit similarities. For words, these are the following word unit similarities:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word and Embedding Unit Similarities", "sec_num": null }, { "text": "w \u2193 := min{w c , w p } w \u00d7 := w c \u2022 w p w \u2191 := max{w c , w p } w + := w c + w p", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word and Embedding Unit Similarities", "sec_num": null }, { "text": "Accordingly, we define four respective embedding unit similarities, e \u2193 , e \u2191 , e \u00d7 , and e + . As mentioned above, both word similarity and embedding similarity positively affect the likelihood that a candidate is the best counterargument. Therefore, we combine each pair of similarities as w \u2193 + e \u2193 , w \u2191 + e \u2191 , w \u00d7 + e \u00d7 , and w + + e + , but we also evaluate their impact in isolation below. 3 Counterargument Scoring Model Based on the unit similarities, we finally define a scoring model for a given pair of argument and candidate counterargument. 
The model includes two unit similarity values, sim and dissim, but dissim is subtracted from sim, such that it actually favors dissimilarity. Thereby, we realize the topic and stance similarity sketched in Figure 1. We weight the two values with a damping factor \u03b1:", "cite_spans": [ { "start": 398, "end": 399, "text": "3", "ref_id": null } ], "ref_spans": [ { "start": 762, "end": 770, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Word and Embedding Unit Similarities", "sec_num": null }, { "text": "\u03b1 \u2022 sim \u2212 (1 \u2212 \u03b1) \u2022 dissim", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word and Embedding Unit Similarities", "sec_num": null }, { "text": "where sim, dissim \u2208 {w \u2193 +e \u2193 , w \u2191 +e \u2191 , w \u00d7 +e \u00d7 , w + + e + } and sim \u2260 dissim.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word and Embedding Unit Similarities", "sec_num": null }, { "text": "The general idea of the scoring model is that sim rewards one type of similarity, whereas subtracting dissim punishes another type. We thereby seek to find the most dissimilar candidate among the similar candidates. The model is meant to give a higher score to a pair the more likely the candidate is the best counterargument to the argument, so the scores can be used for ranking.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word and Embedding Unit Similarities", "sec_num": null }, { "text": "Which combination of sim and dissim turns out best is hard to foresee and may depend on the retrieval task at hand. We hence evaluate different combinations empirically below. The same holds for the damping factor \u03b1 \u2208 [0, 1]. If our hypothesis on similarity and dissimilarity is true, then the best \u03b1 should be close to but lower than 1.
Conversely, if \u03b1 = 1.0 achieves the best performance, then only similarity would be captured by our model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word and Embedding Unit Similarities", "sec_num": null }, { "text": "We now report on systematic ranking experiments with our counterargument scoring model. The goal is to evaluate on all eight retrieval tasks from Section 4 to what extent our hypothesis holds that the best counterargument to an argument invokes the same aspects while having opposing stance. The Java source code of the experiments is available at: http://www.arguana.com/software", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "6" }, { "text": "We evaluated the following set-up of tasks, data, measures, baselines, and approaches.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Set-up", "sec_num": "6.1" }, { "text": "Tasks We tackled each of the eight retrieval tasks as a ranking problem, i.e., we aimed to rank the best counterargument to each argument highest, given all candidates. Accordingly, only one candidate counterargument per argument is correct. 4", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Set-up", "sec_num": "6.1" }, { "text": "4 One alternative would be to see each argument pair as one instance of a classification problem. However, our preliminary tests confirmed the intuition that identifying the best counterargument is hard without knowing the other candidates, i.e., there is no general (dis)similarity threshold that makes an argument the best counterargument. Rather, how similar or dissimilar a counterargument needs to be depends on the topic and on the other candidates. 
Another alternative would be to treat all candidates for an argument as one instance, but this makes the experimental set-up very intricate.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Set-up", "sec_num": "6.1" }, { "text": "Data Table 2 shows the true and false argument pairs in all datasets. We undersampled each training set, resulting in 4065 true and 4065 false training pairs in all tasks. 5 Our model does not do any learning-to-rank on these pairs, but we derived lexicons for the word similarities from them (all stems included in at least 1% of all pairs). As detailed below, we then determined the best model configurations on the validation sets and evaluated these configurations on the test sets.", "cite_spans": [], "ref_spans": [ { "start": 5, "end": 12, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Experimental Set-up", "sec_num": "6.1" }, { "text": "Measures As only one candidate is true per argument, we report the accuracy@1 of each approach, i.e., the percentage of arguments for which the true counterargument was ranked highest. In addition, we compute the rounded mean rank of the best counterargument in all rankings, reflecting the average performance of an approach. As an example, we also report the mean reciprocal rank (MRR), which is more sensitive to outliers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Set-up", "sec_num": "6.1" }, { "text": "Baselines A trivial way to address the given tasks is to pick any candidate by chance for each argument. This random baseline allows us to quantify the impact of other approaches. As counterargument retrieval has not been tackled yet, we do not use any existing baseline. 6 Instead, we evaluate the effects of the different building blocks of our scoring model.
On one hand, we check the need for distinguishing conclusions and premises by comparing to the word argument similarity (w) and the embedding argument similarity (e). On the other hand, we consider all eight word and embedding unit similarities (w \u2193 , w \u2191 , . . . , e + ) as baselines, in order to see whether and how to best aggregate them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Set-up", "sec_num": "6.1" }, { "text": "Approaches After initial tests, we reduced the set of tested values of the damping factor \u03b1 in our scoring model to {0.8, 0.9, 1.0}. On the validation sets of the first six tasks, 7 we then analyzed all possible combinations of w \u2193 +e \u2193 , w \u2191 +e \u2191 , w \u00d7 +e \u00d7 , w + + e + , as well as w + e for sim and dissim. Three configurations of the model turned out best:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Set-up", "sec_num": "6.1" }, { "text": "we := 1.0 \u2022 (w \u00d7 + e \u00d7 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Set-up", "sec_num": "6.1" }, { "text": "we \u2193 := 0.9", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Set-up", "sec_num": "6.1" }, { "text": "\u2022 (w \u00d7 + e \u00d7 ) \u2212 0.1 \u2022 (w \u2193 + e \u2193 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Set-up", "sec_num": "6.1" }, { "text": "we \u2191 := 0.9 Table 3 : Test set accuracy of ranking the best counterargument highest (@1) and mean rank (R) for 14 baselines and approaches (w, e, w \u2193 , . . . , r) in all eight tasks (given by Context and Candidates). 
Each best accuracy value (bold) significantly outperforms the best baseline with 99% ( \u2020) or 99.9% ( \u2021) confidence.", "cite_spans": [], "ref_spans": [ { "start": 12, "end": 19, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Experimental Set-up", "sec_num": "6.1" }, { "text": "\u2022 (w + + e + ) \u2212 0.1 \u2022 (w \u2191 + e \u2191 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Set-up", "sec_num": "6.1" }, { "text": "we was best on the validation set of Same Debate: Opposing Arguments (accuracy@1: 62.1) and we \u2193 on the one of Same Debate: Arguments (49.0). All other tasks were dominated by we \u2191 . Especially, we \u2191 was better than 1.0 \u2022 (w + + e + ) in all of them with clear leads of up to 2.2 points. This underlines the importance of modeling dissimilarity for counterargument retrieval. We took we, we \u2193 , and we \u2191 as our approaches for the test set. 8 Table 3 shows the accuracy@1 and the mean rank of all baselines and approaches on each of the eight given retrieval tasks. Overall, the counter-only tasks seem slightly harder within the same debate (comparing Counters to Opposing), i.e., stance is harder to assess than topical relevance. Conversely, the other Counters tasks seem easier, suggesting that topically close but false candidate counterarguments with the same stance as the argument (which are not included in any Counters task) are classified wrongly most often. Besides, these results support that potential differences in the phrasing of counters are not exploited, as desired.", "cite_spans": [], "ref_spans": [ { "start": 442, "end": 449, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Experimental Set-up", "sec_num": "6.1" }, { "text": "The accuracy of the standard similarity measures, w and e, goes from 65.9 and 62.9 respectively in the smallest task down to 21.8 and 23.9 in the largest. 
8 All validation set results are found in the supplementary material, which we provide at http://www.arguana.com/publications. w is stronger when only counters are candidates, e otherwise. This implies that words capture differences between the best and other counters, whereas embeddings rather help to discard false candidates with the same stance as the argument.", "cite_spans": [ { "start": 155, "end": 156, "text": "8", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "6.2" }, { "text": "Of the eight unit similarity baselines, w + performs best on five tasks (e \u00d7 twice, w \u00d7 once). w + finds 71.5% of the true counterarguments among all opposing counters in a debate, and 28.6% among all test arguments from the entire portal. In that task, however, the mean ranks of w + (33) and particularly of w \u00d7 (354) are much worse than for e \u00d7 (21), meaning that words alone are insufficient to robustly find counterarguments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "6.2" }, { "text": "we, we \u2193 , and we \u2191 outperform all baselines in all tasks, improving the accuracy by 8.1 (Same Theme: Arguments) to 10.3 points (Entire Portal: Counters) over w and e, and by at least 3.0 over the best baseline in each task. Among all opposing arguments from the same debate (true-to-false ratio 1:6.6), we finds 60.3% of the best counterarguments, and 44.9% when all arguments are given (1:13.3).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "6.2" }, { "text": "The winner in our evaluation is we \u2191 , though, being best in five of the eight tasks. It found the true counterargument among all opposing counters in 74.5% of all cases, and about every third time (32.4) among all 2801 test set arguments, a setting where the random baseline has virtually no chance.
Given all arguments from the same theme, we \u2191 puts the best counterargument at a mean rank of 5 (MRR 0.58), and for the entire portal still at 15 (MRR 0.5). Table 4 : Accuracy@1 and mean rank of the best baseline (w + ) and approach (we \u2191 ) on each theme when all 2801 test set arguments are candidates.", "cite_spans": [], "ref_spans": [ { "start": 442, "end": 449, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "6.2" }, { "text": "Although our scoring model thus does not solve the retrieval tasks, we conclude that it serves as a robust approach to rank the best counterargument high.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "6.2" }, { "text": "To test significance, we separately computed the accuracy@1 for the arguments from each theme. The differences between the 15 values of the best approach on each task and those of the best baseline (w + , w \u00d7 , or e \u00d7 ) were normally distributed. Since the baselines and approaches are dependent, we used a one-tailed dependent t-test with paired samples. As Table 3 specifies, our approaches are consistently better, partly with at least 99% confidence, partly even with 99.9% confidence.", "cite_spans": [], "ref_spans": [ { "start": 359, "end": 366, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "6.2" }, { "text": "In Table 4 , we exemplarily detail the comparison of the best approach (we \u2191 ) to the best baseline (w + ) on Entire Portal: Arguments. The mean ranks across themes underline the robustness of we \u2191 , being in the top 10 for 7 and in the top 20 even for 13 themes. Still, the accuracy@1 of both w + and we \u2191 varies notably, in case of we \u2191 from 12.1 for free speech debate to 46.7 for sport. 
For free speech debates (e.g., "This House would criminalise blasphemy"), we observed that their arguments tend to be disproportionately long, which might distort the similarity values. In the case of sports, the topical specificity (e.g., "This House would ban boxing") reduces the probability of mistakenly choosing candidates from other themes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "6.2" }, { "text": "Free speech debate turned out to be the hardest theme in seven tasks, health in the remaining one. Besides sports, in some tasks the best results were obtained for religion and science, both of which share the characteristic of dealing with very specific topics. 9", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "6.2" }, { "text": "This paper has asked how to find the best counterargument to any argument without prior knowledge of the argument's topic. We did not aim to engineer the best approach to this retrieval task, but to study whether we can model the simultaneous similarity and dissimilarity of a counterargument to an argument computationally. For the restricted domain of debate portal arguments, our main result is quite intriguing: The best model (we \u2191 ) rewards a high overall similarity to the argument's conclusion and premises while punishing an overly high similarity to either of them. Despite its simplicity, we \u2191 found the best counterargument among 2801 candidates in almost a third of all cases, and ranked it into the top 15 on average. This supports our hypothesis that the best counterargument often just addresses the same topical aspects with opposite stance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "Of course, our hypothesis is a simplification, i.e., there are counterarguments that will not be found based on aspect and stance similarity only.
Apart from some hyperparameters, however, our model is unsupervised and it does not make any assumption about an argument's topic. Hence, it applies to any argument, given a pool of candidate counterarguments. While the model can be considered open-topic, a next step will be to study counterargument retrieval open-source.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "We are confident that the modeled intuition generalizes beyond idebate.org. To obtain further insights into the nature of counterarguments, deeper linguistic analysis along with supervised learning may be needed, though. We provide a corpus to train respective approaches, but leave the according research to future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "The intended practical application of our model is to retrieve counterarguments in automatic debating technologies (Rinott et al., 2015) and argument search (Wachsmuth et al., 2017b) . While debate portal arguments are often suitable in this regard, in general not always a real counterargument exists to an argument. Still, returning one that addresses similar aspects with opposite stance makes sense then. An alternative would be to generate counterarguments, but we believe that humans are better than machines in writing them -currently.", "cite_spans": [ { "start": 115, "end": 136, "text": "(Rinott et al., 2015)", "ref_id": "BIBREF26" }, { "start": 157, "end": 182, "text": "(Wachsmuth et al., 2017b)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "As indicated above, counters on idebate.org (including all true counterarguments) may also differ linguistically from points (all of which are false). However, we assume this to be a specific corpus bias and hence do not explicitly account for it. 
Section 6 will show whether having both points and counters as candidates makes counterargument retrieval harder.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In principle, other unit similarities could be used for words than for embeddings. However, we decided to couple them to maintain interpretability of our experiment results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Undersampling was done stratified, such that the same number of false counterarguments was taken from each type, b-i, inFigure 2that is relevant in the respective task.6 Notice, though, that we tested a number of approaches to identify opposing stance, as discussed in Section 5.7 We did not expect \"game-changing\" validation set results for the last two tasks and, so, left them out for time reasons.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The individual results of the best approach and baseline on each theme are also found in the supplementary material.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Unit segmentation of argumentative texts", "authors": [ { "first": "Yamen", "middle": [], "last": "Ajjour", "suffix": "" }, { "first": "Wei-Fan", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Johannes", "middle": [], "last": "Kiesel", "suffix": "" }, { "first": "Henning", "middle": [], "last": "Wachsmuth", "suffix": "" }, { "first": "Benno", "middle": [], "last": "Stein", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 4th Workshop on Argument Mining", "volume": "", "issue": "", "pages": "118--128", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yamen Ajjour, Wei-Fan Chen, Johannes Kiesel, Hen- ning Wachsmuth, and Benno Stein. 2017. Unit seg- mentation of argumentative texts. 
In Proceedings of the 4th Workshop on Argument Mining, pages 118- 128. Association for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Crossdomain mining of argumentative text through distant supervision", "authors": [ { "first": "Khalid", "middle": [], "last": "Al-Khatib", "suffix": "" }, { "first": "Henning", "middle": [], "last": "Wachsmuth", "suffix": "" }, { "first": "Matthias", "middle": [], "last": "Hagen", "suffix": "" }, { "first": "Jonas", "middle": [], "last": "K\u00f6hler", "suffix": "" }, { "first": "Benno", "middle": [], "last": "Stein", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "1395--1404", "other_ids": { "DOI": [ "10.18653/v1/N16-1165" ] }, "num": null, "urls": [], "raw_text": "Khalid Al-Khatib, Henning Wachsmuth, Matthias Ha- gen, Jonas K\u00f6hler, and Benno Stein. 2016. Cross- domain mining of argumentative text through dis- tant supervision. In Proceedings of the 2016 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, pages 1395-1404. Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "SentiWordNet 3.0: An enhanced lexical resource for sentiment analysis and opinion mining", "authors": [ { "first": "Stefano", "middle": [], "last": "Baccianella", "suffix": "" }, { "first": "Andrea", "middle": [], "last": "Esuli", "suffix": "" }, { "first": "Fabrizio", "middle": [], "last": "Sebastiani", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the Seventh conference on International Language Resources and Evaluation (LREC'10). 
European Languages Resources Association (ELRA)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stefano Baccianella, Andrea Esuli, and Fabrizio Sebas- tiani. 2010. SentiWordNet 3.0: An enhanced lexi- cal resource for sentiment analysis and opinion min- ing. In Proceedings of the Seventh conference on International Language Resources and Evaluation (LREC'10). European Languages Resources Associ- ation (ELRA).", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Stance classification of context-dependent claims", "authors": [ { "first": "Roy", "middle": [], "last": "Bar-Haim", "suffix": "" }, { "first": "Indrajit", "middle": [], "last": "Bhattacharya", "suffix": "" }, { "first": "Francesco", "middle": [], "last": "Dinuzzo", "suffix": "" }, { "first": "Amrita", "middle": [], "last": "Saha", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Slonim", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "251--261", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roy Bar-Haim, Indrajit Bhattacharya, Francesco Din- uzzo, Amrita Saha, and Noam Slonim. 2017. Stance classification of context-dependent claims. In Pro- ceedings of the 15th Conference of the European Chapter of the Association for Computational Lin- guistics: Volume 1, Long Papers, pages 251-261. 
Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Vocabulary choice as an indicator of perspective", "authors": [ { "first": "Eyal", "middle": [], "last": "Beata Beigman Klebanov", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Beigman", "suffix": "" }, { "first": "", "middle": [], "last": "Diermeier", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the ACL 2010", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Beata Beigman Klebanov, Eyal Beigman, and Daniel Diermeier. 2010. Vocabulary choice as an indica- tor of perspective. In Proceedings of the ACL 2010", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Association for Computational Linguistics", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "253--257", "other_ids": {}, "num": null, "urls": [], "raw_text": "Conference Short Papers, pages 253-257. Associa- tion for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Back up your stance: Recognizing arguments in online discussions", "authors": [ { "first": "Filip", "middle": [], "last": "Boltu\u017ei\u0107", "suffix": "" }, { "first": "Jan", "middle": [], "last": "\u0160najder", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the First Workshop on Argumentation Mining", "volume": "", "issue": "", "pages": "49--58", "other_ids": { "DOI": [ "10.3115/v1/W14-2107" ] }, "num": null, "urls": [], "raw_text": "Filip Boltu\u017ei\u0107 and Jan \u0160najder. 2014. Back up your stance: Recognizing arguments in online discus- sions. In Proceedings of the First Workshop on Ar- gumentation Mining, pages 49-58. 
Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Supporting human answers for advice-seeking questions in CQA sites", "authors": [ { "first": "Liora", "middle": [], "last": "Braunstain", "suffix": "" }, { "first": "Oren", "middle": [], "last": "Kurland", "suffix": "" }, { "first": "David", "middle": [], "last": "Carmel", "suffix": "" }, { "first": "Idan", "middle": [], "last": "Szpektor", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Shtok", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 38th European Conference on IR Research", "volume": "", "issue": "", "pages": "129--141", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liora Braunstain, Oren Kurland, David Carmel, Idan Szpektor, and Anna Shtok. 2016. Supporting human answers for advice-seeking questions in CQA sites. In Proceedings of the 38th European Conference on IR Research, pages 129-141.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Combining textual entailment and argumentation theory for supporting online debates interactions", "authors": [ { "first": "Elena", "middle": [], "last": "Cabrio", "suffix": "" }, { "first": "Serena", "middle": [], "last": "Villata", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "208--212", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elena Cabrio and Serena Villata. 2012. Combining tex- tual entailment and argumentation theory for sup- porting online debates interactions. In Proceed- ings of the 50th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 2: Short Papers), pages 208-212. 
Association for Computa- tional Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Comprehensive Survey on Distance/Similarity Measures between Probability Density Functions", "authors": [ { "first": "Sung-Hyuk", "middle": [], "last": "Cha", "suffix": "" } ], "year": 2007, "venue": "International Journal of Mathematical Models and Methods in Applied Sciences", "volume": "1", "issue": "4", "pages": "300--307", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sung-Hyuk Cha. 2007. Comprehensive Survey on Distance/Similarity Measures between Probability Density Functions. International Journal of Math- ematical Models and Methods in Applied Sciences, 1(4):300-307.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Identifying attack and support argumentative relations using deep learning", "authors": [ { "first": "Oana", "middle": [], "last": "Cocarascu", "suffix": "" }, { "first": "Francesca", "middle": [], "last": "Toni", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1374--1379", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oana Cocarascu and Francesca Toni. 2017. Identify- ing attack and support argumentative relations us- ing deep learning. In Proceedings of the 2017 Con- ference on Empirical Methods in Natural Language Processing, pages 1374-1379. Association for Com- putational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Attacking Faulty Reasoning: A Practical Guide to Fallacy-Free Arguments", "authors": [ { "first": "T", "middle": [ "Edward" ], "last": "Damer", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Edward Damer. 2009. Attacking Faulty Reasoning: A Practical Guide to Fallacy-Free Arguments, 6th edition. 
Wadsworth, Cengage Learning, Belmont, CA.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Classifying arguments by scheme", "authors": [ { "first": "Vanessa", "middle": [], "last": "Wei Feng", "suffix": "" }, { "first": "Graeme", "middle": [], "last": "Hirst", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "987--996", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vanessa Wei Feng and Graeme Hirst. 2011. Classify- ing arguments by scheme. In Proceedings of the 49th Annual Meeting of the Association for Com- putational Linguistics: Human Language Technolo- gies, pages 987-996. Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A Practical Study of Argument", "authors": [ { "first": "Trudy", "middle": [], "last": "Govier", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Trudy Govier. 2010. A Practical Study of Argument, 7th edition. Wadsworth, Cengage Learning, Bel- mont, CA.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "What makes a convincing argument? Empirical analysis and detecting attributes of convincingness in web argumentation", "authors": [ { "first": "Ivan", "middle": [], "last": "Habernal", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1214--1223", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ivan Habernal and Iryna Gurevych. 2016. What makes a convincing argument? Empirical analysis and de- tecting attributes of convincingness in web argumen- tation. 
In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1214-1223. Association for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Before name-calling: Dynamics and triggers of ad hominem fallacies in web argumentation", "authors": [ { "first": "Ivan", "middle": [], "last": "Habernal", "suffix": "" }, { "first": "Henning", "middle": [], "last": "Wachsmuth", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" }, { "first": "Benno", "middle": [], "last": "Stein", "suffix": "" } ], "year": 2018, "venue": "16th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ivan Habernal, Henning Wachsmuth, Iryna Gurevych, and Benno Stein. 2018. Before name-calling: Dy- namics and triggers of ad hominem fallacies in web argumentation. In 16th Annual Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Technolo- gies. Association for Computational Linguistics, to appear.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Argument relation classification using a joint inference model", "authors": [ { "first": "Yufang", "middle": [], "last": "Hou", "suffix": "" }, { "first": "Charles", "middle": [], "last": "Jochim", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 4th Workshop on Argument Mining", "volume": "", "issue": "", "pages": "60--66", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yufang Hou and Charles Jochim. 2017. Argument rela- tion classification using a joint inference model. In Proceedings of the 4th Workshop on Argument Min- ing, pages 60-66. 
Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Bag of tricks for efficient text classification", "authors": [ { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 15th Conference of the European Chapter", "volume": "2", "issue": "", "pages": "427--431", "other_ids": {}, "num": null, "urls": [], "raw_text": "Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of tricks for efficient text classification. In Proceedings of the 15th Con- ference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Pa- pers, pages 427-431. Association for Computational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "From word embeddings to document distances", "authors": [ { "first": "Matt", "middle": [ "J" ], "last": "Kusner", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Nicholas", "middle": [ "I" ], "last": "Kolkin", "suffix": "" }, { "first": "Kilian", "middle": [ "Q" ], "last": "Weinberger", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 32Nd International Conference on International Conference on Machine Learning", "volume": "37", "issue": "", "pages": "957--966", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matt J. Kusner, Yu Sun, Nicholas I. Kolkin, and Kil- ian Q. Weinberger. 2015. From word embeddings to document distances. 
In Proceedings of the 32nd International Conference on Machine Learning, Volume 37, pages 957-966.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Mining argumentative structure from natural language text using automatically generated premise-conclusion topic models", "authors": [ { "first": "John", "middle": [], "last": "Lawrence", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Reed", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 4th Workshop on Argument Mining", "volume": "", "issue": "", "pages": "39--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Lawrence and Chris Reed. 2017. Mining argu- mentative structure from natural language text using automatically generated premise-conclusion topic models. In Proceedings of the 4th Workshop on Ar- gument Mining, pages 39-48. Association for Com- putational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Topic-based agreement and disagreement in us electoral manifestos", "authors": [ { "first": "Stefano", "middle": [], "last": "Menini", "suffix": "" }, { "first": "Federico", "middle": [], "last": "Nanni", "suffix": "" }, { "first": "Simone", "middle": [ "Paolo" ], "last": "Ponzetto", "suffix": "" }, { "first": "Sara", "middle": [], "last": "Tonelli", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2938--2944", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stefano Menini, Federico Nanni, Simone Paolo Ponzetto, and Sara Tonelli. 2017. Topic-based agree- ment and disagreement in us electoral manifestos. In Proceedings of the 2017 Conference on Empiri- cal Methods in Natural Language Processing, pages 2938-2944. 
Association for Computational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Advances in pre-training distributed word representations", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Puhrsch", "suffix": "" }, { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Edouard Grave, Piotr Bojanowski, Christian Puhrsch, and Armand Joulin. 2017. Ad- vances in pre-training distributed word representa- tions. CoRR.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Distributed representations of words and phrases and their compositionality", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 26th International Conference on Neural Information Processing Systems", "volume": "2", "issue": "", "pages": "3111--3119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Cor- rado, and Jeffrey Dean. 2013. Distributed represen- tations of words and phrases and their composition- ality. 
In Proceedings of the 26th International Conference on Neural Information Processing Systems, Volume 2, pages 3111-3119.
Association for Computational Linguistics.
Association for Computational Linguistics.
Association for Computational Linguistics.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Conceptnet 5.5: An open multilingual graph of general knowledge", "authors": [ { "first": "Robert", "middle": [], "last": "Speer", "suffix": "" }, { "first": "Joshua", "middle": [], "last": "Chin", "suffix": "" }, { "first": "Catherine", "middle": [], "last": "Havasi", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "4444--4451", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of gen- eral knowledge. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, pages 4444-4451.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Recognizing the absence of opposing arguments in persuasive essays", "authors": [ { "first": "Christian", "middle": [], "last": "Stab", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Third Workshop on Argument Mining (ArgMining2016)", "volume": "", "issue": "", "pages": "113--118", "other_ids": { "DOI": [ "10.18653/v1/W16-2813" ] }, "num": null, "urls": [], "raw_text": "Christian Stab and Iryna Gurevych. 2016. Recogniz- ing the absence of opposing arguments in persuasive essays. In Proceedings of the Third Workshop on Argument Mining (ArgMining2016), pages 113-118. 
Association for Computational Linguistics.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Winning arguments: Interaction dynamics and persuasion strategies in good-faith online discussions", "authors": [ { "first": "Chenhao", "middle": [], "last": "Tan", "suffix": "" }, { "first": "Vlad", "middle": [], "last": "Niculae", "suffix": "" }, { "first": "Cristian", "middle": [], "last": "Danescu-Niculescu-Mizil", "suffix": "" }, { "first": "Lillian", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 25th International World Wide Web Conference", "volume": "", "issue": "", "pages": "613--624", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chenhao Tan, Vlad Niculae, Cristian Danescu- Niculescu-Mizil, and Lillian Lee. 2016. Winning arguments: Interaction dynamics and persuasion strategies in good-faith online discussions. In Pro- ceedings of the 25th International World Wide Web Conference, pages 613-624.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "The Uses of Argument", "authors": [ { "first": "Stephen", "middle": [ "E" ], "last": "Toulmin", "suffix": "" } ], "year": 1958, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen E. Toulmin. 1958. The Uses of Argument. 
Cambridge University Press.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Computational argumentation quality assessment in natural language", "authors": [ { "first": "Henning", "middle": [], "last": "Wachsmuth", "suffix": "" }, { "first": "Nona", "middle": [], "last": "Naderi", "suffix": "" }, { "first": "Yufang", "middle": [], "last": "Hou", "suffix": "" }, { "first": "Yonatan", "middle": [], "last": "Bilu", "suffix": "" }, { "first": "Vinodkumar", "middle": [], "last": "Prabhakaran", "suffix": "" }, { "first": "Tim", "middle": [ "Alberdingk" ], "last": "Thijm", "suffix": "" }, { "first": "Graeme", "middle": [], "last": "Hirst", "suffix": "" }, { "first": "Benno", "middle": [], "last": "Stein", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "176--187", "other_ids": {}, "num": null, "urls": [], "raw_text": "Henning Wachsmuth, Nona Naderi, Yufang Hou, Yonatan Bilu, Vinodkumar Prabhakaran, Tim Al- berdingk Thijm, Graeme Hirst, and Benno Stein. 2017a. Computational argumentation quality assess- ment in natural language. In Proceedings of the 15th Conference of the European Chapter of the Associa- tion for Computational Linguistics: Volume 1, Long Papers, pages 176-187. 
Association for Computational Linguistics.
\"PageRank\" for Argument Relevance. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 1117-1127. Association for Computational Linguistics.
In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 136-141. Association for Computational Linguistics.
[Figure 2: Structure of idebate.org for one true argument pair in the corpus. The diagram crosses points and their counters (conclusion and premises) against candidate contexts: same stance, opposite stance, other debates from the same theme, and debates from other themes; the candidate types (a)-(i) are used in Section 4 to specify the candidates in the different retrieval tasks.]

Theme               Debates   Points   Counters
Culture                  46      278        278
Digital freedoms         48      341        341
Economy                  95      590        588
Education                58      382        381
Environment              36      215        215
Free speech debate       43      274        273
Health                   57      334        333
International           196     1315       1307
Law                     116      732        730
Philosophy               50      320        320
Politics                155      982        978
Religion                 30      179        179
Science                  41      271        269
Society                  75      436        431
Sport                    23      130        130

Training set            644     4083       4065
Validation set          211     1290       1287
Test set                214     1406       1401

counterargs-18         1069     6779       6753
", "html": null, "text": "Structure of idebate.org for one true argument pair in our corpus. Colors denote matching stance; we assume arguments from other debates to have no stance towards a point. Points have a conclusion and premises, counters only premises. (a)-(i) are used in Section 4 to specify the candidates in different tasks." }, "TABREF1": { "num": null, "type_str": "table", "content": "", "html": null, "text": "Distribution of debates, points, and counters over the themes in the counterargs-18 corpus. The bottom rows show the size of the datasets." }, "TABREF3": { "num": null, "type_str": "table", "content": "
Same Debate: Opposing Arguments All arguments in the same debate with opposite stance are candidates
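As an illustration of this task setup, the sketch below takes all opposite-stance arguments from the same debate as candidates and ranks them by similarity to the given argument. Both `word_overlap` and `rank_candidates` are hypothetical stand-ins, not the paper's model: the actual approach scores simultaneous similarity and dissimilarity over premises and conclusions using word and embedding features, which a plain Jaccard word overlap only crudely approximates.

```python
def word_overlap(a: str, b: str) -> float:
    """Jaccard word overlap, a crude stand-in for the paper's similarity model."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def rank_candidates(argument: str, candidates: list) -> list:
    """Rank opposite-stance candidates; the best counterargument should come first.

    Hypothetical helper: the hypothesis is that the best counterargument
    invokes the same aspects as the argument, so higher lexical/semantic
    similarity (given opposite stance) should indicate a better counter.
    """
    return sorted(candidates, key=lambda c: word_overlap(argument, c), reverse=True)

argument = "gun ownership is an integral aspect of the right to self defence"
candidates = [
    "taxes on sugar reduce obesity",                              # off-topic
    "the right to self defence does not require gun ownership",   # same aspects, opposite stance
    "school uniforms limit freedom of expression",                # off-topic
]
ranking = rank_candidates(argument, candidates)
```

Under this toy scoring, the candidate sharing the argument's aspects ranks first, mirroring the paper's hypothesis; replacing `word_overlap` with an embedding-based measure would be the natural next step.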
", "html": null, "text": "Number of true and false argument-counterargument pairs as well as their ratio for each evaluated context and type of candidate counterarguments in the three datasets. Each line defines one retrieval task." } } } }