{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:09:45.116217Z" }, "title": "An Exploratory Study of Argumentative Writing by Young Students: A Transformer-based Approach", "authors": [ { "first": "Debanjan", "middle": [], "last": "Ghosh", "suffix": "", "affiliation": {}, "email": "dghosh@ets.org" }, { "first": "Beata", "middle": [ "Beigman" ], "last": "Klebanov", "suffix": "", "affiliation": {}, "email": "bbeigmanklebanov@ets.org" }, { "first": "Yi", "middle": [], "last": "Song", "suffix": "", "affiliation": {}, "email": "ysong@ets.org" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present a computational exploration of argument critique writing by young students. Middle school students were asked to criticize an argument presented in the prompt, focusing on identifying and explaining the reasoning flaws. This task resembles an established college-level argument critique task. Lexical and discourse features that utilize detailed domain knowledge to identify critiques exist for the college task but do not perform well on the young students' data. Instead, a transformer-based architecture (e.g., BERT) fine-tuned on a large corpus of critique essays from the college task performs much better (over 20% improvement in F1 score). Analysis of the performance of various configurations of the system suggests that while children's writing does not exhibit the standard discourse structure of an argumentative essay, it does share basic local sequential structures with the more mature writers.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "We present a computational exploration of argument critique writing by young students. Middle school students were asked to criticize an argument presented in the prompt, focusing on identifying and explaining the reasoning flaws. This task resembles an established college-level argument critique task. 
Lexical and discourse features that utilize detailed domain knowledge to identify critiques exist for the college task but do not perform well on the young students' data. Instead, a transformer-based architecture (e.g., BERT) fine-tuned on a large corpus of critique essays from the college task performs much better (over 20% improvement in F1 score). Analysis of the performance of various configurations of the system suggests that while children's writing does not exhibit the standard discourse structure of an argumentative essay, it does share basic local sequential structures with the more mature writers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Argument and logic are essential in academic writing as they enhance the critical thinking capacities of students. Argumentation requires systematic reasoning and the skill of using relevant examples to craft support for one's point of view (Walton, 1996) . In recent times, the surge in AI-informed scoring systems has made it possible to assess writing skills using automated systems. Recent research suggests the possibility of argumentation-aware automated essay scoring systems (Stab and Gurevych, 2017b) .", "cite_spans": [ { "start": 243, "end": 257, "text": "(Walton, 1996)", "ref_id": "BIBREF21" }, { "start": 484, "end": 510, "text": "(Stab and Gurevych, 2017b)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Most of the current work on computational analysis of argumentative writing in an educational context focuses on automatically identifying the argument structures (e.g., argument components and their relations) in the essays (Stab and Gurevych, 2017a; Persing and Ng, 2016; Nguyen and Litman, 2016) and on predicting essay scores from features derived from the structures (e.g., the number of claims and premises and the number of supported claims) (Ghosh et al., 2016) . 
Related research has also addressed the problem of scoring a particular dimension of essay quality, such as relevance to the prompt (Persing and Ng, 2014) , opinions and their targets (Farra et al., 2015) , and argument strength (Persing and Ng, 2015) , among others.", "cite_spans": [ { "start": 222, "end": 248, "text": "(Stab and Gurevych, 2017a;", "ref_id": "BIBREF18" }, { "start": 249, "end": 270, "text": "Persing and Ng, 2016;", "ref_id": "BIBREF15" }, { "start": 271, "end": 295, "text": "Nguyen and Litman, 2016)", "ref_id": "BIBREF11" }, { "start": 446, "end": 466, "text": "(Ghosh et al., 2016)", "ref_id": "BIBREF9" }, { "start": 601, "end": 623, "text": "(Persing and Ng, 2014)", "ref_id": "BIBREF13" }, { "start": 653, "end": 673, "text": "(Farra et al., 2015)", "ref_id": "BIBREF8" }, { "start": 694, "end": 716, "text": "(Persing and Ng, 2015)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "While the argument mining literature has addressed the educational context, it has so far mainly focused on analyzing college-level writing. For instance, Nguyen and Litman (2018) investigated argument structures in the TOEFL11 corpus (Blanchard et al., 2013) ; Beigman Klebanov et al. (2017) and Persing and Ng (2015) analyzed the writing of university students; Stab and Gurevych (2017b) used data from \"essayforum.com\", where college entrance examination is the largest forum. To the best of our knowledge, computational analysis of arguments in young students' writing has not yet been done. 
Writing quality in essays by young writers has been addressed (Deane, 2014; Attali and Powers, 2008; Attali and Burstein, 2006) , but identification of arguments was not part of these studies.", "cite_spans": [ { "start": 151, "end": 175, "text": "Nguyen and Litman (2018)", "ref_id": "BIBREF12" }, { "start": 227, "end": 251, "text": "(Blanchard et al., 2013)", "ref_id": "BIBREF3" }, { "start": 286, "end": 307, "text": "Persing and Ng (2015)", "ref_id": "BIBREF14" }, { "start": 643, "end": 656, "text": "(Deane, 2014;", "ref_id": "BIBREF6" }, { "start": 657, "end": 681, "text": "Attali and Powers, 2008;", "ref_id": "BIBREF1" }, { "start": 682, "end": 708, "text": "Attali and Burstein, 2006)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we present a novel learning-and-assessment context where middle school students were asked to criticize an argument presented in the prompt, focusing on identifying and explaining the reasoning flaws. Using a relatively small pilot dataset collected for this task, our aim here is to automatically identify good argument critiques in the young students' writing, with the twin goals of (a) exploring the characteristics of young students' writing for this task and (b) assessing the potential for scoring and feedback applications. We start by describing and exemplifying the data, as well as the argument critique annotation we performed on it (section 2). Experiments and results are presented in section 3, followed by a discussion in section 4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Dear Editor, Advertising aimed at children under 12 should be allowed for several reasons.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "First, one family in my neighbourhood sits down and watches TV together almost every evening. The whole family learns a lot, which shows that advertising for children is always a good thing because it brings families together.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Second, research shows that children can't remember commercials well anyway, so they can't be doing kids any harm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Finally, the arguments against advertising aren't very effective. Some countries banned ads because kids thought the ads were funny. But that's not a good reason. Think about it: the advertising industry spends billions of dollars a year on ads for children. They wouldn't spend all the money if the ads weren't doing some good. Let's not hurt children by stopping a good thing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "If anyone doesn't like children's ads, the advertisers should just try to make them more interesting. The ads are allowed to be shown on TV, so they shouldn't be banned.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The data used in this study was collected as part of a pilot of a scenario-based assessment of argumentation skills with about 900 middle school students. 1 Students engaged in a sequence of steps in which they researched and reflected on whether advertising to children under the age of twelve should be banned. 
The test consists of four tasks; we use the responses to Task 3, in which students are asked to review a letter to the editor and evaluate problems in the letter's reasoning or use of evidence (see Table 1 ).", "cite_spans": [], "ref_spans": [ { "start": 511, "end": 518, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Dataset and Annotation", "sec_num": "2" }, { "text": "Students were expected to produce a written critique of the arguments, demonstrating their ability to identify and explain problems in the reasoning or use of evidence. For example, the first excerpt below shows a well-articulated critique of the hasty generalization problem in the prompt:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset and Annotation", "sec_num": "2" }, { "text": "(1) Just because it brings one family together to learn does not mean that it will bring all families together to learn.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset and Annotation", "sec_num": "2" }, { "text": "(2) The first one about the family in your neighborhood is more like an opinion, not actual information from the article.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset and Annotation", "sec_num": "2" }, { "text": "(3) Their claims are badly writtin [sic] and have no good arguments. They need to support their claims with SOLID evidence and only claim arguments that can be undecicive [sic] .", "cite_spans": [ { "start": 35, "end": 40, "text": "[sic]", "ref_id": null }, { "start": 171, "end": 176, "text": "[sic]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Dataset and Annotation", "sec_num": "2" }, { "text": "However, many students had difficulty explaining the reasoning flaws clearly. In the second excerpt, the student felt that the argument about the family in the neighborhood was not strong, but did not demonstrate an understanding of the weak generalization in the explanation. 
Other common problems included students summarizing the prompt without criticizing it, or providing a generic critique that does not adhere to the particulars of the prompt (excerpt (3)).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset and Annotation", "sec_num": "2" }, { "text": "The goal of the argument critique annotation (described next) was to identify where in a response good critiques are made, such as the one in the first excerpt.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset and Annotation", "sec_num": "2" }, { "text": "We identified 11 valid critiques of the arguments in the letter. These critiques included: (1) overgeneralizing from a single example; (2) example irrelevant to the argument; (3) example misrepresenting what actually happened; (4) misrepresenting the goal of making advertisements; (5) misunderstanding the problem; (6) neglecting potential side effects of allowing advertising aimed at children; (7) making a wrong argument from sign; (8) argument contradicting authoritative evidence; (9) argument contradicting one's own experience; (10) making a circular argument; (11) making contradictory claims. All sentences containing any material belonging to a valid critique were marked and henceforth denoted as Arg; the rest are denoted as NoArg. Three annotators were employed to mark the sentences as Arg/NoArg. We computed \u03ba between each pair of annotators based on the annotation of 50 essays. Inter-annotator agreement for this sentence-level Arg/NoArg classification for each pair of annotators was 0.714, 0.714, and 0.811, respectively, resulting in an average \u03ba of 0.746.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation of Critiques:", "sec_num": null }, { "text": "We split the data into training (585 response critiques) and test (252 response critiques). 
The training partition has 2,220 sentences (515 Arg; 1,705 NoArg; the average number of words per sentence is 11 (std = 8.03)); the test partition contains 973 sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Descriptive statistics:", "sec_num": null }, { "text": "In this writing task, young students were asked to analyze the given prompt, focusing on identifying and explaining its reasoning flaws. This task is similar to a well-established task for college students previously discussed in the literature (Beigman Klebanov et al., 2017) . Compared to the college task, the prompt for children appears to have more obvious reasoning errors. The tasks also differ in the types of responses they elicit. While the college task elicits a full essay-length response, the current critique task elicits a shorter, less formal response.", "cite_spans": [ { "start": 245, "end": 276, "text": "(Beigman Klebanov et al., 2017)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline", "sec_num": "3.1" }, { "text": "As our baseline, we evaluate the features that were reported as being effective for identifying argument critiques in the context of the college task. Beigman Klebanov et al. 
(2017) described a logistic regression classifier with two types of features:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline", "sec_num": "3.1" }, { "text": "\u2022 features capturing discourse structure, since it was found that argument critiques tended to occupy certain consistent discourse roles that are common in argumentative essays (such as the SUPPORT, rather than THESIS or BACKGROUND, roles), and to participate in roles that receive a lot of elaboration, such as a SUPPORT sentence following or preceding another SUPPORT sentence, or a CONCLUSION sentence followed by another sentence in the same role.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline", "sec_num": "3.1" }, { "text": "\u2022 features capturing content, based on hybrid word and POS n-grams (see Beigman Klebanov et al. (2017) for more detail). Table 2 shows the results, with each of the two subsets of features separately and together. Clearly, the classifier performs quite poorly for detecting Arg sentences in children's data. Moreover, it seems that whatever performance is achieved is due to the content features, while the structural features fail to detect Arg. Thus, the well-organized nature of mature writing, where essays have identifiable discourse elements such as THESIS, MAIN CLAIM, SUPPORT, CONCLUSION (Burstein et al., 2003) , does not seem to carry over to young students' less formal writing.", "cite_spans": [ { "start": 600, "end": 623, "text": "(Burstein et al., 2003)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 120, "end": 127, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Baseline", "sec_num": "3.1" }, { "text": "As the training dataset is relatively small, we leverage pre-trained language models that have been shown to be effective in various NLP applications. 
In particular, we focus on BERT (Devlin et al., 2018) , a bidirectional transformer-based architecture (Vaswani et al., 2017) that has produced excellent performance on argumentation tasks such as argument component and relation identification (Chakrabarty et al., 2019) and argument clustering (Reimers et al., 2019 ). The BERT model is initially trained over a 3.3 billion word English corpus on two tasks: (1) given a sentence containing multiple masked words, predict the identity of a particular masked word, and (2) given two sentences, predict whether they are adjacent. The BERT model exploits a multi-head attention operation to compute context-sensitive representations for each token in a sentence. During its training, a special token \"[CLS]\" is added to the beginning of each training utterance. During evaluation, the learned representation for this \"[CLS]\" token is processed by an additional layer with nonlinear activation. A standard pre-trained BERT model can be used for transfer learning by \"fine-tuning\" it on the classification data of Arg and NoArg sentences (i.e., the training partition), or by first fine-tuning the BERT language model itself on a large unsupervised corpus from a partially relevant domain, such as a corpus of writing from advanced students, and then fine-tuning again on the classification data. 
In both cases, BERT makes predictions via the \"[CLS]\" token.", "cite_spans": [ { "start": 176, "end": 197, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF7" }, { "start": 229, "end": 251, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF20" }, { "start": 389, "end": 415, "text": "(Chakrabarty et al., 2019)", "ref_id": "BIBREF5" }, { "start": 440, "end": 461, "text": "(Reimers et al., 2019", "ref_id": null }, { "start": 893, "end": 898, "text": "[CLS]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Our system", "sec_num": "3.2" }, { "text": "Fine-tuning on classification data: We first fine-tune a pre-trained BERT model (the \"bert-base-uncased\" version) with the training data. During training, the class weights are proportional to the numbers of Arg and NoArg instances. Unless stated otherwise, we kept the following parameters throughout the experiments: we utilize a batch size of 16 instances, a learning_rate of 3e-5, a warmup_proportion of 0.1, and the Adam optimizer. The model was fine-tuned for only five epochs. This experiment is denoted as BERT bl in Table 3 . We observe that the F1 score for Arg is 56%, a 12% absolute improvement in F1 score over the structure+content features (Table 2) . This confirms that BERT is able to perform well even when fine-tuned on a relatively small training corpus with default parameters.", "cite_spans": [], "ref_spans": [ { "start": 521, "end": 528, "text": "Table 3", "ref_id": "TABREF4" }, { "start": 664, "end": 673, "text": "(Table 2)", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Our system", "sec_num": "3.2" }, { "text": "In the next step, we reuse the same pre-trained BERT model while transforming the training instances into paired-sentence instances, where the first sentence is the candidate Arg or NoArg sentence and the second sentence of the pair is the immediately following sentence in the essay. For instance, for the first example in section 2, \"Just because . . . 
to learn\", now the instance also contains the subsequent sentence. A special token \"FINAL_SENTENCE\" is used when the candidate Arg or NoArg sentence is the last sentence in the essay. This modification of the data representation might help the BERT model for two reasons. First, pairing the candidate sentence with the next one will encourage the model to more directly utilize the next sentence prediction task. Second, since multi-sentence same-discourse-role elaboration was found to be common in the Beigman Klebanov et al. (2017) data, BERT may exploit such sequential structures if they exist at all in our data. This is model BERT pair in Table 3 . With the paired-sentence transformation of the instances, the F1 improves to 61.2%, a boost of 5% over BERT bl .", "cite_spans": [ { "start": 1000, "end": 1022, "text": "Klebanov et al. (2017)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 1134, "end": 1141, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Our system", "sec_num": "3.2" }, { "text": "Fine-tuning with a large essay corpus: It has been shown in related research (Chakrabarty et al., 2019 ) that transfer learning by fine-tuning on a domain-specific corpus using a supervised learning objective can boost performance. We used a large proprietary corpus of college-level argument critique essays similar to those analyzed by Beigman Klebanov et al. (2017) . This corpus consists of 351,363 unannotated essays, where the average essay contains 16 sentences, resulting in a corpus of 5.64 million sentences. We fine-tune the pre-trained BERT language model on this large corpus for five epochs and then again fine-tune it with the training partition (BERT bl+lm ). Likewise, BERT pair+lm represents the model after the pre-trained BERT language model is fine-tuned with the large corpus and then again fine-tuned with the paired training instances. 
We observe that fine-tuning the language model improves F1 to 62.3%, whereas BERT pair+lm results in the highest F1 of 65.8%, around 5% higher than BERT pair and over 20% higher than the feature-based model.", "cite_spans": [ { "start": 77, "end": 102, "text": "(Chakrabarty et al., 2019", "ref_id": "BIBREF5" }, { "start": 346, "end": 368, "text": "Klebanov et al. (2017)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Our system", "sec_num": "3.2" }, { "text": "The difference in F1 between BERT bl , BERT bl+lm , and BERT pair+lm is almost exclusively in recall -they have comparable precision at about 0.6, with recall of 0.52, 0.64, and 0.74, respectively. Partitioning out 10% of the training data for a development set, we found that BERT bl+lm detected 13 more Arg sentences than BERT bl in the development data. These fell into two sequential patterns: (a) the sentence is followed by another that further develops the critique (7 cases) -see excerpts (4) and (5) below; (b) the sentence is the final sentence in the response (6 cases) -see excerpt (6).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4" }, { "text": "(4) They werent made to be appealing to adults. They only need kids to want the product, and beg their parents for it.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4" }, { "text": "(5) Finally, is spending billions of dollars on something that has no point a good thing? 
There are many arguements that all this money is just going to waste, and it could be used on more important things.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4" }, { "text": "(6) I say this because in an article I found out that children do remember advertisements that they have seen before.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4" }, { "text": "Our interpretation of this finding is that BERT bl+lm captured organizational elements in children's writing that are similar to adult patterns. Beigman Klebanov et al. (2017) found that adult writers often reiterate a previously stated critique in an extended CONCLUSION and spread critiques across consecutive SUPPORT sentences. Thus, even though alignment of critiques with \"standard\" discourse elements such as CONCLUSION and SUPPORT is not recognizable in children's writing (as witnessed by the failure of the structural features to detect critiques), some basic local sequential patterns do exist, and they are sufficiently similar to the ones in adult writing that a system with its language model tuned on adult critique writing can capitalize on this knowledge.", "cite_spans": [ { "start": 153, "end": 175, "text": "Klebanov et al. (2017)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4" }, { "text": "Interestingly, BERT pair learned similar sequential patterns -indeed, 7 of the 13 sentences gained by BERT bl+lm over BERT bl are also recalled by BERT pair . This further reinforces the conclusion that young writers exhibit certain local sequential patterns of discourse organization that they share with mature argument critique writers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4" }, { "text": "We present a computational exploration of argument critiques written by middle school children. 
A feature set designed for college-level critique writing has poor recall of critiques when trained on children's data; a pre-trained BERT model fine-tuned on children's data does better by 18%. When BERT's language model is additionally fine-tuned on a large corpus of college critique essays, recall improves by a further 20%, suggesting the existence of some similarity between young and mature writers. Performance analysis suggests that BERT capitalized on certain sequential patterns in critique writing; a larger study examining patterns of argumentation in children's data is needed to confirm the hypothesis. In the future, we plan to fine-tune our models on auxiliary datasets, such as the convincing argument dataset from Habernal and Gurevych (2016) .", "cite_spans": [ { "start": 822, "end": 850, "text": "Habernal and Gurevych (2016)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "5" }, { "text": "The data was collected under the ETS CBAL (Cognitively Based Assessment of, for, and as Learning) Initiative.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Automated essay scoring with e-rater v.2", "authors": [ { "first": "Yigal", "middle": [], "last": "Attali", "suffix": "" }, { "first": "Jill", "middle": [], "last": "Burstein", "suffix": "" } ], "year": 2006, "venue": "The Journal of Technology, Learning and Assessment", "volume": "4", "issue": "3", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yigal Attali and Jill Burstein. 2006. Automated essay scoring with e-rater v.2. 
The Journal of Technology, Learning and Assessment, 4(3).", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A developmental writing scale", "authors": [ { "first": "Yigal", "middle": [], "last": "Attali", "suffix": "" }, { "first": "Don", "middle": [], "last": "Powers", "suffix": "" } ], "year": 2008, "venue": "ETS Research Report Series", "volume": "", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yigal Attali and Don Powers. 2008. A developmental writing scale. ETS Research Report Series, 2008(1):i-59.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Detecting Good Arguments in a Non-Topic-Specific Way: An Oxymoron?", "authors": [ { "first": "Beata", "middle": [ "Beigman" ], "last": "Klebanov", "suffix": "" }, { "first": "Binod", "middle": [], "last": "Gyawali", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Song", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "244--249", "other_ids": {}, "num": null, "urls": [], "raw_text": "Beata Beigman Klebanov, Binod Gyawali, and Yi Song. 2017. Detecting Good Arguments in a Non-Topic-Specific Way: An Oxymoron? In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 244-249, Vancouver, Canada. 
Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Toefl11: A corpus of non-native english", "authors": [ { "first": "Daniel", "middle": [], "last": "Blanchard", "suffix": "" }, { "first": "Joel", "middle": [], "last": "Tetreault", "suffix": "" }, { "first": "Derrick", "middle": [], "last": "Higgins", "suffix": "" }, { "first": "Aoife", "middle": [], "last": "Cahill", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Chodorow", "suffix": "" } ], "year": 2013, "venue": "ETS Research Report Series", "volume": "2013", "issue": "2", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Blanchard, Joel Tetreault, Derrick Higgins, Aoife Cahill, and Martin Chodorow. 2013. TOEFL11: A corpus of non-native English. ETS Research Report Series, 2013(2):i-15.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Finding the WRITE Stuff: Automatic Identification of Discourse Structure in Student Essays", "authors": [ { "first": "Jill", "middle": [], "last": "Burstein", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2003, "venue": "IEEE Intelligent Systems", "volume": "18", "issue": "1", "pages": "32--39", "other_ids": { "DOI": [ "10.1109/MIS.2003.1179191" ] }, "num": null, "urls": [], "raw_text": "Jill Burstein, Daniel Marcu, and Kevin Knight. 2003. Finding the WRITE Stuff: Automatic Identification of Discourse Structure in Student Essays. 
IEEE Intelligent Systems, 18(1):32-39.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Ampersand: Argument mining for persuasive online discussions", "authors": [ { "first": "Tuhin", "middle": [], "last": "Chakrabarty", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Hidey", "suffix": "" }, { "first": "Smaranda", "middle": [], "last": "Muresan", "suffix": "" }, { "first": "Kathleen", "middle": [], "last": "Mckeown", "suffix": "" }, { "first": "Alyssa", "middle": [], "last": "Hwang", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "2926--2936", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tuhin Chakrabarty, Christopher Hidey, Smaranda Muresan, Kathleen Mckeown, and Alyssa Hwang. 2019. Ampersand: Argument mining for persuasive online discussions. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2926-2936.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Using writing process and product features to assess writing quality and explore how those features relate to other literacy tasks", "authors": [ { "first": "Paul", "middle": [], "last": "Deane", "suffix": "" } ], "year": 2014, "venue": "ETS Research Report Series", "volume": "2014", "issue": "1", "pages": "1--23", "other_ids": { "DOI": [ "10.1002/ets2.12002" ] }, "num": null, "urls": [], "raw_text": "Paul Deane. 2014. Using writing process and product features to assess writing quality and explore how those features relate to other literacy tasks. 
ETS Research Report Series, 2014(1):1-23.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Scoring persuasive essays using opinions and their targets", "authors": [ { "first": "Noura", "middle": [], "last": "Farra", "suffix": "" }, { "first": "Swapna", "middle": [], "last": "Somasundaran", "suffix": "" }, { "first": "Jill", "middle": [], "last": "Burstein", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the Workshop on Innovative Use of NLP for Building Educational Applications", "volume": "", "issue": "", "pages": "64--74", "other_ids": {}, "num": null, "urls": [], "raw_text": "Noura Farra, Swapna Somasundaran, and Jill Burstein. 2015. Scoring persuasive essays using opinions and their targets. 
In Proceedings of the Workshop on Innovative Use of NLP for Building Educational Applications, pages 64-74.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Coarse-grained argumentation features for scoring persuasive essays", "authors": [ { "first": "Debanjan", "middle": [], "last": "Ghosh", "suffix": "" }, { "first": "Aquila", "middle": [], "last": "Khanam", "suffix": "" }, { "first": "Yubo", "middle": [], "last": "Han", "suffix": "" }, { "first": "Smaranda", "middle": [], "last": "Muresan", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "549--554", "other_ids": { "DOI": [ "10.18653/v1/P16-2089" ] }, "num": null, "urls": [], "raw_text": "Debanjan Ghosh, Aquila Khanam, Yubo Han, and Smaranda Muresan. 2016. Coarse-grained argumentation features for scoring persuasive essays. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 549-554, Berlin, Germany. Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Which argument is more convincing? analyzing and predicting convincingness of web arguments using bidirectional lstm", "authors": [ { "first": "Ivan", "middle": [], "last": "Habernal", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1589--1599", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ivan Habernal and Iryna Gurevych. 2016. Which argument is more convincing? analyzing and predicting convincingness of web arguments using bidirectional lstm.
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1589-1599.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Context-aware argumentative relation mining", "authors": [ { "first": "Huy", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": "Diane", "middle": [], "last": "Litman", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1127--1137", "other_ids": {}, "num": null, "urls": [], "raw_text": "Huy Nguyen and Diane Litman. 2016. Context-aware argumentative relation mining. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1127-1137.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Argument mining for improving the automated scoring of persuasive essays", "authors": [ { "first": "Huy", "middle": [ "V" ], "last": "Nguyen", "suffix": "" }, { "first": "Diane", "middle": [ "J" ], "last": "Litman", "suffix": "" } ], "year": 2018, "venue": "Thirty-Second AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Huy V Nguyen and Diane J Litman. 2018. Argument mining for improving the automated scoring of persuasive essays.
In Thirty-Second AAAI Conference on Artificial Intelligence.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Modeling prompt adherence in student essays", "authors": [ { "first": "Isaac", "middle": [], "last": "Persing", "suffix": "" }, { "first": "Vincent", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1534--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Isaac Persing and Vincent Ng. 2014. Modeling prompt adherence in student essays. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1534-1543.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Modeling argument strength in student essays", "authors": [ { "first": "Isaac", "middle": [], "last": "Persing", "suffix": "" }, { "first": "Vincent", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "543--552", "other_ids": {}, "num": null, "urls": [], "raw_text": "Isaac Persing and Vincent Ng. 2015. Modeling argument strength in student essays.
In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 543-552.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "End-to-end argumentation mining in student essays", "authors": [ { "first": "Isaac", "middle": [], "last": "Persing", "suffix": "" }, { "first": "Vincent", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "1384--1394", "other_ids": {}, "num": null, "urls": [], "raw_text": "Isaac Persing and Vincent Ng. 2016. End-to-end argumentation mining in student essays. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1384-1394.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Classification and clustering of arguments with contextualized word embeddings", "authors": [ { "first": "Nils", "middle": [], "last": "Reimers", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Schiller", "suffix": "" }, { "first": "Tilman", "middle": [], "last": "Beck", "suffix": "" }, { "first": "Johannes", "middle": [], "last": "Daxenberger", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Stab", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1906.09821" ] }, "num": null, "urls": [], "raw_text": "Nils Reimers, Benjamin Schiller, Tilman Beck, Johannes Daxenberger, Christian Stab, and Iryna Gurevych. 2019. Classification and clustering of arguments with contextualized word embeddings.
arXiv preprint arXiv:1906.09821.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Examining students' ability to critique arguments and exploring the implications for assessment and instruction", "authors": [ { "first": "Yi", "middle": [], "last": "Song", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Deane", "suffix": "" }, { "first": "Mary", "middle": [], "last": "Fowles", "suffix": "" } ], "year": 2017, "venue": "ETS Research Report Series", "volume": "", "issue": "16", "pages": "1--12", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yi Song, Paul Deane, and Mary Fowles. 2017. Examining students' ability to critique arguments and exploring the implications for assessment and instruction. ETS Research Report Series, 2017(16):1-12.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Parsing argumentation structures in persuasive essays", "authors": [ { "first": "Christian", "middle": [], "last": "Stab", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2017, "venue": "Computational Linguistics", "volume": "43", "issue": "3", "pages": "619--659", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christian Stab and Iryna Gurevych. 2017a. Parsing argumentation structures in persuasive essays. Computational Linguistics, 43(3):619-659.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Recognizing insufficiently supported arguments in argumentative essays", "authors": [ { "first": "Christian", "middle": [], "last": "Stab", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "980--990", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christian Stab and Iryna Gurevych. 2017b. Recognizing insufficiently supported arguments in argumentative essays.
In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 980-990, Valencia, Spain. Association for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Argumentation schemes for presumptive reasoning", "authors": [ { "first": "Douglas", "middle": [ "N" ], "last": "Walton", "suffix": "" } ], "year": 1996, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Douglas N Walton. 1996. Argumentation schemes for presumptive reasoning. Psychology Press.", "links": null } }, "ref_entries": { "TABREF0": { "type_str": "table", "content": "", "num": null, "text": "The prompt of the argument critique task.", "html": null }, "TABREF2": { "type_str": "table", "content": "
: Performance of baseline features. \"Structure\" corresponds to the dr_pn feature set, \"Content\" corresponds to the 1-3gr ppos feature set, both from Beigman Klebanov et al. (2017).
", "num": null, "text": "", "html": null }, "TABREF4": { "type_str": "table", "content": "
: Performance of BERT transformer, various configurations. Rows 1, 2 present results of BERT fine-tuning with training data only; rows 3, 4 present the effect of additional language model fine-tuning. Highest scores are bold.
", "num": null, "text": "", "html": null } } } }