{ "paper_id": "P17-1011", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:15:35.201150Z" }, "title": "Discourse Mode Identification in Essays", "authors": [ { "first": "Wei", "middle": [], "last": "Song", "suffix": "", "affiliation": { "laboratory": "", "institution": "Capital Normal University", "location": { "settlement": "Beijing, Beijing", "country": "China, China" } }, "email": "" }, { "first": "Dong", "middle": [], "last": "Wang", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Ruiji", "middle": [], "last": "Fu", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Lizhen", "middle": [], "last": "Liu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Capital Normal University", "location": { "settlement": "Beijing, Beijing", "country": "China, China" } }, "email": "lzliu@cnu.edu.cn" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Harbin Institute of Technology", "location": { "settlement": "Harbin", "country": "China" } }, "email": "tliu@ir.hit.edu.cn" }, { "first": "Guoping", "middle": [], "last": "Hu", "suffix": "", "affiliation": {}, "email": "gphu@iflytek.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Discourse modes play an important role in writing composition and evaluation. This paper presents a study on the manual and automatic identification of narration, exposition, description, argument and emotion expressing sentences in narrative essays. We annotate a corpus to study the characteristics of discourse modes and describe a neural sequence labeling model for identification. Evaluation results show that discourse modes can be identified automatically with an average F1-score of 0.7. We further demonstrate that discourse modes can be used as features that improve automatic essay scoring (AES). 
The impacts of discourse modes on AES are also discussed.", "pdf_parse": { "paper_id": "P17-1011", "_pdf_hash": "", "abstract": [ { "text": "Discourse modes play an important role in writing composition and evaluation. This paper presents a study on the manual and automatic identification of narration, exposition, description, argument and emotion expressing sentences in narrative essays. We annotate a corpus to study the characteristics of discourse modes and describe a neural sequence labeling model for identification. Evaluation results show that discourse modes can be identified automatically with an average F1-score of 0.7. We further demonstrate that discourse modes can be used as features that improve automatic essay scoring (AES). The impacts of discourse modes on AES are also discussed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Discourse modes, also known as rhetorical modes, describe the purpose and conventions of the main kinds of language-based communication. The most common discourse modes include narration, description, exposition and argument.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A typical text makes use of all the modes, although a given text will often have a main mode. Despite their importance in writing composition and assessment (Braddock et al., 1963) , there is relatively little work on analyzing discourse modes with computational models. 
We aim to contribute to automatic discourse mode identification and its application to writing assessment.", "cite_spans": [ { "start": 168, "end": 191, "text": "(Braddock et al., 1963)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The use of discourse modes is important in writing composition, because they relate to several aspects that influence the quality of a text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "First, discourse modes reflect the organization of a text. Natural language texts consist of sentences which form a unified whole and make up the discourse (Clark et al., 2013) . Recognizing the structure of text organization is a key part of discourse analysis. Meurer (2002) points out that discourse modes stand for unity, as they constitute general patterns of language organization strategically used by the writer. Smith (2003) also proposes to study discourse passages from a linguistic point of view through discourse modes. The organization of a text can be revealed by segmenting the text into passages according to the set of discourse modes that indicate the functional relationships between its parts. For example, the writer can present major events through narration, provide details with description and establish ideas with argument. The combination and interaction of various discourse modes produce an organized, unified text.", "cite_spans": [ { "start": 157, "end": 177, "text": "(Clark et al., 2013)", "ref_id": "BIBREF15" }, { "start": 272, "end": 278, "text": "(2002)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Second, discourse modes have rhetorical significance. 
Discourse modes are closely related to rhetoric (Connors, 1981; Brooks and Warren, 1958) , which offers principles for learning how to express material in the best way. Discourse modes have different preferences in expressive style. Narration mainly controls story progression by introducing and connecting events; exposition instructs or explains, so its language should be precise and informative; argument is used to convince or persuade through logical and inspiring statements; description attempts to convey detailed observations of people and scenery, which is related to the writing of figurative language; the way emotions are expressed may relate to the use of rhetorical devices and poetic language. Discourse modes thus reflect the variety of expressive styles, and the flexible use of various discourse modes should be important evidence of language proficiency.", "cite_spans": [ { "start": 102, "end": 117, "text": "(Connors, 1981;", "ref_id": "BIBREF19" }, { "start": 118, "end": 142, "text": "Brooks and Warren, 1958)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Based on these considerations, we propose the discourse mode identification task. In particular, we make the following contributions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We build a corpus of narrative essays written by Chinese students in their native language. Sentence-level discourse modes are annotated with acceptable inter-annotator agreement. 
Corpus analysis reveals the characteristics of discourse modes in several aspects, including discourse mode distribution, co-occurrence and transition patterns.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We describe a multi-label neural sequence labeling approach for discourse mode identification, so that co-occurrence and transition preferences can be captured. Experimental results show that discourse modes can be identified with an average F1-score of 0.7, indicating that automatic discourse mode identification is feasible.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We demonstrate the effectiveness of taking discourse modes into account for automatic essay scoring. A higher ratio of description and emotion expressing can indicate essay quality to a certain extent. Discourse modes can potentially be used as features for other NLP applications.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Discourse analysis is an important subfield of natural language processing (Webber et al., 2011) . Discourse is expected to be both cohesive and coherent. Many principles have been proposed for discourse analysis, such as coherence relations (Hobbs, 1979; Mann and Thompson, 1988) , the centering theory for local coherence (Grosz et al., 1995) and topic-based text segmentation (Hearst, 1997) . In some domains, discourse can be segmented according to specific discourse elements (Hutchins, 1977; Teufel and Moens, 2002; Clerehan and Buchbinder, 2006; Song et al., 2015) . This paper focuses on discourse modes, following Smith (2003) . From a linguistic point of view, discourse modes are supposed to have different distributions of situation entity types such as event, state and generic (Smith, 2003; Mavridou et al., 2015) . 
Therefore, there is work on automatically labeling clause-level situation entity types (Palmer et al., 2007; Friedrich et al., 2016) . However, situation entity type identification is itself a challenging problem. It is even harder for Chinese, since Chinese doesn't have grammatical tense (Xue and Zhang, 2014) and sentence components are often omitted. This increases the difficulty of situation-entity-type-based discourse mode identification. In this paper, we investigate an end-to-end approach that directly models discourse modes without first identifying situation entity types.", "cite_spans": [ { "start": 75, "end": 96, "text": "(Webber et al., 2011)", "ref_id": "BIBREF49" }, { "start": 236, "end": 249, "text": "(Hobbs, 1979;", "ref_id": "BIBREF27" }, { "start": 250, "end": 274, "text": "Mann and Thompson, 1988)", "ref_id": "BIBREF36" }, { "start": 318, "end": 338, "text": "(Grosz et al., 1995)", "ref_id": "BIBREF25" }, { "start": 373, "end": 387, "text": "(Hearst, 1997)", "ref_id": "BIBREF26" }, { "start": 475, "end": 491, "text": "(Hutchins, 1977;", "ref_id": "BIBREF29" }, { "start": 492, "end": 515, "text": "Teufel and Moens, 2002;", "ref_id": "BIBREF48" }, { "start": 516, "end": 546, "text": "Clerehan and Buchbinder, 2006;", "ref_id": "BIBREF16" }, { "start": 547, "end": 565, "text": "Song et al., 2015)", "ref_id": "BIBREF45" }, { "start": 620, "end": 632, "text": "Smith (2003)", "ref_id": "BIBREF44" }, { "start": 790, "end": 803, "text": "(Smith, 2003;", "ref_id": "BIBREF44" }, { "start": 804, "end": 826, "text": "Mavridou et al., 2015)", "ref_id": "BIBREF37" }, { "start": 916, "end": 937, "text": "(Palmer et al., 2007;", "ref_id": "BIBREF40" }, { "start": 938, "end": 961, "text": "Friedrich et al., 2016)", "ref_id": "BIBREF23" }, { "start": 1138, "end": 1159, "text": "(Xue and Zhang, 2014)", "ref_id": "BIBREF50" } ], "ref_spans": [], "eq_spans": [], "section": "Discourse Analysis", "sec_num": "2.1" }, { "text": 
"Automatic writing assessment is an important application of natural language processing. The task aims to give computers the ability to appreciate and critique writing. It would be hugely beneficial for applications like automatic essay scoring (AES) and content recommendation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatic Writing Assessment", "sec_num": "2.2" }, { "text": "AES is the task of building a computer-aided scoring system, in order to reduce the involvement of human raters. Traditional approaches are based on supervised learning with designed feature templates (Larkey, 1998; Burstein, 2003; Attali and Burstein, 2006; Chen and He, 2013; Phandi et al., 2015; Cummins et al., 2016) . Recently, automatic feature learning based on neural networks has started to draw attention (Alikaniotis et al., 2016; Dong and Zhang, 2016; Taghipour and Ng, 2016) .", "cite_spans": [ { "start": 201, "end": 215, "text": "(Larkey, 1998;", "ref_id": "BIBREF33" }, { "start": 216, "end": 231, "text": "Burstein, 2003;", "ref_id": "BIBREF9" }, { "start": 232, "end": 258, "text": "Attali and Burstein, 2006;", "ref_id": "BIBREF2" }, { "start": 259, "end": 277, "text": "Chen and He, 2013;", "ref_id": "BIBREF12" }, { "start": 278, "end": 298, "text": "Phandi et al., 2015;", "ref_id": "BIBREF42" }, { "start": 299, "end": 320, "text": "Cummins et al., 2016)", "ref_id": "BIBREF20" }, { "start": 411, "end": 437, "text": "(Alikaniotis et al., 2016;", "ref_id": "BIBREF0" }, { "start": 438, "end": 459, "text": "Dong and Zhang, 2016;", "ref_id": "BIBREF21" }, { "start": 460, "end": 483, "text": "Taghipour and Ng, 2016)", "ref_id": "BIBREF46" } ], "ref_spans": [], "eq_spans": [], "section": "Automatic Writing Assessment", "sec_num": "2.2" }, { "text": "Writing assessment involves highly technical aspects of language and discourse. In addition to giving a score, it would be better to provide explainable feedback to learners at the same time. 
Some work has studied several aspects such as spelling errors (Brill and Moore, 2000) , grammar errors (Rozovskaya and Roth, 2010) , coherence (Barzilay and Lapata, 2008) , organization of argumentative essays (Persing et al., 2010) and the use of figurative language (Louis and Nenkova, 2013) . This paper extends this line of work by taking discourse modes into account.", "cite_spans": [ { "start": 253, "end": 276, "text": "(Brill and Moore, 2000)", "ref_id": "BIBREF7" }, { "start": 294, "end": 321, "text": "(Rozovskaya and Roth, 2010)", "ref_id": "BIBREF43" }, { "start": 334, "end": 361, "text": "(Barzilay and Lapata, 2008)", "ref_id": "BIBREF4" }, { "start": 401, "end": 423, "text": "(Persing et al., 2010)", "ref_id": "BIBREF41" }, { "start": 459, "end": 484, "text": "(Louis and Nenkova, 2013)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Automatic Writing Assessment", "sec_num": "2.2" }, { "text": "A main challenge of discourse analysis is that it is hard to collect large-scale data due to its complexity, which may lead to a data sparseness problem. Recently, neural networks have become popular for natural language processing (Bengio et al., 2003; Collobert et al., 2011) . One of their advantages is the ability to learn representations automatically. Representing words or relations with continuous vectors (Mikolov et al., 2013; Ji and Eisenstein, 2014) embeds semantics in the same space, which helps alleviate the data sparseness problem and enables end-to-end and multi-task learning. Recurrent neural networks (RNNs) (Graves, 2012) and variants like Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) and Gated Recurrent Unit (GRU) (Cho et al., 2014) networks show good performance in capturing long-distance dependencies on tasks like Named Entity Recognition (NER) (Chiu and Nichols, 2016; Ma and Hovy, 2016) , dependency parsing (Dyer et al., 2015) and semantic composition of documents (Tang et al., 2015) . 
This work describes a hierarchical neural architecture with multiple label outputs for modeling the discourse mode sequence of sentences.", "cite_spans": [ { "start": 214, "end": 235, "text": "(Bengio et al., 2003;", "ref_id": "BIBREF5" }, { "start": 236, "end": 259, "text": "Collobert et al., 2011)", "ref_id": "BIBREF18" }, { "start": 393, "end": 415, "text": "(Mikolov et al., 2013;", "ref_id": "BIBREF39" }, { "start": 416, "end": 440, "text": "Ji and Eisenstein, 2014)", "ref_id": "BIBREF30" }, { "start": 613, "end": 627, "text": "(Graves, 2012)", "ref_id": "BIBREF24" }, { "start": 680, "end": 714, "text": "(Hochreiter and Schmidhuber, 1997)", "ref_id": "BIBREF28" }, { "start": 741, "end": 759, "text": "(Cho et al., 2014)", "ref_id": "BIBREF14" }, { "start": 909, "end": 927, "text": "Ma and Hovy, 2016)", "ref_id": "BIBREF35" }, { "start": 949, "end": 968, "text": "(Dyer et al., 2015)", "ref_id": "BIBREF22" }, { "start": 1007, "end": 1026, "text": "(Tang et al., 2015)", "ref_id": "BIBREF47" } ], "ref_spans": [], "eq_spans": [], "section": "Neural Sequence Modeling", "sec_num": "2.3" }, { "text": "We are interested in the use of discourse modes in writing composition. This section describes the discourse modes we are going to study, an annotated corpus of student essays and what we learn from corpus analysis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discourse Mode Annotation", "sec_num": "3" }, { "text": "Discourse modes have several taxonomies in the literature. Four basic discourse modes are narration, description, exposition and argument in English composition and rhetoric (Bain, 1890) . 
Smith (2003) proposes five modes for studying discourse passages:", "cite_spans": [ { "start": 174, "end": 186, "text": "(Bain, 1890)", "ref_id": "BIBREF3" }, { "start": 189, "end": 201, "text": "Smith (2003)", "ref_id": "BIBREF44" } ], "ref_spans": [], "eq_spans": [], "section": "Discourse Modes", "sec_num": "3.1" }, { "text": "narrative, description, report, information and argument. In Chinese composition, discourse modes are categorized into narration, description, exposition, argument and emotion expressing (Zhu, 1983) .", "cite_spans": [ { "start": 187, "end": 198, "text": "(Zhu, 1983)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Discourse Modes", "sec_num": "3.1" }, { "text": "These taxonomies are similar. Most of their elements have literal or conceptual counterparts in the other taxonomies; e.g., the exposition mode has similar functions to the information mode. Emotion expressing, which conveys the writer's emotions, is relatively special. It can be realized directly or through lyrical writing with beautiful and poetic language. It is also related to appeal to emotion, a method of argumentation in classical rhetoric that works by manipulating the recipient's emotions (Aristotle and Kennedy, 2006) . Proper emotion expressing can touch the hearts of readers and improve the expressiveness of writing. Therefore, considering it an independent mode is also reasonable.", "cite_spans": [ { "start": 527, "end": 556, "text": "(Aristotle and Kennedy, 2006)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Discourse Modes", "sec_num": "3.1" }, { "text": "Since the essays in this work are written in Chinese, we follow the Chinese convention with five discourse modes. 
Emotion expressing is added to the four widely recognized discourse modes, and Smith's report mode is viewed as a subtype of the description mode: dialogue description.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discourse Modes", "sec_num": "3.1" }, { "text": "In summary, we study the following discourse modes:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discourse Modes", "sec_num": "3.1" }, { "text": "\u2022 Narration introduces an event or series of events into the universe of discourse. The events are temporally related according to narrative time. E.g., Last year, we drove to San Francisco along the State Route 1 (SR 1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discourse Modes", "sec_num": "3.1" }, { "text": "\u2022 Exposition has a function to explain or instruct. It provides background information in a narrative context. The information presented should be general and (expected to be) well-accepted truth. E.g., SR 1 is a major north-south state highway that runs along most of the Pacific coastline of the U.S.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discourse Modes", "sec_num": "3.1" }, { "text": "\u2022 Description re-creates, invents, or vividly shows what things are like according to the five senses, so that the reader can picture what is being described. E.g., Along SR 1 are stunning rugged coastline, coastal forests and cliffs, beautiful little towns and some of the West coast's most amazing nature.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discourse Modes", "sec_num": "3.1" }, { "text": "\u2022 Argument puts forward a point of view on a topic and argues for its validity in order to convince or persuade the reader. 
E.g., Point Arena Lighthouse is a must-see along SR 1, in my opinion.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discourse Modes", "sec_num": "3.1" }, { "text": "\u2022 Emotion expressing 1 presents the writer's emotions, usually in a subjective, personal and lyrical way, to involve the reader in experiencing the same situations and being touched. E.g., I really love the ocean, the coastline and all the amazing scenery along the route.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discourse Modes", "sec_num": "3.1" }, { "text": "The distinction between discourse modes is expected to be clarified conceptually by considering their different communication purposes. However, there are still specific ambiguous and vague cases. We describe the data annotation and corpus analysis in the following sections. Table 1 : Inter-annotator agreement between two annotators on the dominant discourse mode. Initial: The result of the first round annotation; Final: The result of the final annotation; \u03ba: Agreement measured with Cohen's Kappa.", "cite_spans": [], "ref_spans": [ { "start": 283, "end": 290, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "When could I come back again?", "sec_num": null }, { "text": "Discourse modes are almost never found in a pure form; they are embedded one within another to help the writer achieve the intended purpose, and the emphasis varies in different types of writing. We focus on narrative essays. A good narrative composition must properly manipulate multiple discourse modes to make the essay vivid and impressive. 
The corpus has 415 narrative essays written by high school students in their native Chinese language. The average number of sentences is 32 and the average length is 670 words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Annotation", "sec_num": "3.2" }, { "text": "We invited two high school teachers to annotate discourse modes at the sentence level, expecting their background to help with annotation. A detailed manual was discussed before annotation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Annotation", "sec_num": "3.2" }, { "text": "We notice that discourse modes can mix in the same sentence. Therefore, the annotation standard allows one sentence to have multiple modes. But we require that every sentence have a dominant mode. The annotators should try to think from the writer's perspective and infer the writer's main purpose in writing the sentence in order to decide the dominant mode.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Annotation", "sec_num": "3.2" }, { "text": "Among the discourse modes, description can be applied in various situations. We focus on the following description types: portrait, appearance, action, dialogue, psychological, environment and detail description. If a sentence has any type of description, it is assigned a description label.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Annotation", "sec_num": "3.2" }, { "text": "We conducted corpus analysis on the annotated data to gain observations on several aspects. Inter-Annotator Agreement: 50 essays were independently annotated by two annotators. We evaluate the inter-annotator agreement on the dominant mode. The two annotators' annotations are used as the golden answer and the prediction respectively. We compute the precision, recall and F1-score for each discourse mode separately to measure the inter-annotator agreement. 
Precision and recall are symmetric for the two annotators.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus Analysis", "sec_num": "3.3" }, { "text": "The result of the first round annotation is shown in the INITIAL columns of Table 1 . The agreement on the argument mode is low, while the agreement on the other modes is acceptable. The average F1-score is 0.69. The Cohen's Kappa (Cohen et al., 1960) is 0.55 over all judgements on the dominant mode.", "cite_spans": [ { "start": 223, "end": 243, "text": "(Cohen et al., 1960)", "ref_id": "BIBREF17" } ], "ref_spans": [ { "start": 76, "end": 83, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Corpus Analysis", "sec_num": "3.3" }, { "text": "The main disagreement on argument lies in the confusion with emotion expressing. Consider the following sentence:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus Analysis", "sec_num": "3.3" }, { "text": "Father's love is the fire that lights the lamp of hope.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus Analysis", "sec_num": "3.3" }, { "text": "One annotator thought that it was expressed in an emotional and lyrical way, so the discourse mode should be emotion expressing. The other thought that it (implicitly) makes a point and should be an argument. Many disagreements happened in cases like this.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus Analysis", "sec_num": "3.3" }, { "text": "Based on the observations of the first round annotation, we discussed and updated the manual and had the annotators recheck their annotations. The final result is shown in the FINAL columns of Table 1 . The agreement on description decreases; annotators seem to be more conservative in labeling description as the dominant mode. The overall average F1-score increases to 0.78 and the Cohen's Kappa is 0.72. 
This indicates that humans can reach an acceptable agreement on the dominant discourse mode of sentences after training. Discourse mode distribution: After the training phase, the annotators labeled the whole corpus. Figure 1 shows the distribution of dominant discourse modes. (Co-occurrence counts, from Table 2: Nar 5285 11 2552 65 2; Exp -148 11 1 1; Des --2538 105 8; Emo ---1947 63; Arg ----318.) Some sentences have two discourse modes, and 3% have more than two discourse modes. Table 2 shows the co-occurrence of discourse modes. The numbers on the diagonal represent the distribution of discourse modes for sentences with only one mode. The numbers off the diagonal indicate the co-occurrence of modes in the same sentence. We can see that description tends to co-occur with narration and emotion expressing. Description can provide states that accompany events, and emotion-evoking scenes are often described to elicit a strong emotional response, for example:", "cite_spans": [], "ref_spans": [ { "start": 195, "end": 202, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 626, "end": 634, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 670, "end": 770, "text": "Nar 5285 11 2552 65 2 Exp -148 11 1 1 Des --2538 105 8 Emo ---1947 63 Arg ----318", "ref_id": "TABREF2" }, { "start": 840, "end": 847, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Corpus Analysis", "sec_num": "3.3" }, { "text": "The bright moon hanging on the distant sky reminds me of my hometown miles away.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus Analysis", "sec_num": "3.3" }, { "text": "Emotion expressing and argument also co-occur in some cases. 
It is reasonable, since a successful emotional appeal can enhance the effectiveness of an argument.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus Analysis", "sec_num": "3.3" }, { "text": "Generally, these observations are consistent with intuition.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus Analysis", "sec_num": "3.3" }, { "text": "Properly combining multiple modes could produce impressive sentences. Table 3 shows the transition matrix between the dominant modes of consecutive sentences within the same paragraphs. All modes tend to transition to themselves except exposition, which is rare and usually brief. This means that the discourse modes of adjacent sentences are highly correlated. We also see that narration and emotion expressing appear more often at the beginning and the end of essays. The above observations indicate that discourse modes have preferred local patterns.", "cite_spans": [], "ref_spans": [ { "start": 70, "end": 77, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Corpus Analysis", "sec_num": "3.3" }, { "text": "To summarize, the implications of the corpus analysis include: (1) Manual identification of discourse modes is feasible with an acceptable inter-annotator agreement; (2) The distribution of discourse modes in narrative essays is imbalanced;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transition:", "sec_num": null }, { "text": "(3) About 22% of sentences have multiple discourse modes; (4) Discourse modes have local transition patterns, in that consecutive discourse modes are highly correlated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transition:", "sec_num": null }, { "text": "This section describes the proposed method for discourse mode identification. According to the corpus analysis, sentences often have multiple discourse modes and follow local transition patterns. 
Therefore, we view this task as a multi-label sequence labeling problem.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discourse Mode Identification based on Neural Sequence Labeling", "sec_num": "4" }, { "text": "We propose a hierarchical neural sequence labeling model to capture information at multiple levels. Figure 2(a) shows the basic architecture. We introduce it from the bottom up. Word level embedding layer: We transform words into continuous vectors, i.e., word embeddings. Vector representations of words are useful for capturing semantic relatedness. This should be effective in our case, since a large amount of training data is not available. It is unrealistic to learn the embedding parameters on limited data, so we simply look up embeddings of words from a pre-trained word embedding table. The pre-trained word embeddings were learned with the Word2Vec toolkit (Mikolov et al., 2013) on a domain corpus which consists of about 490,000 student essays. The embeddings are kept unchanged during learning and prediction. Sentence level GRU layer: Each sentence is a sequence of words. We feed the word embeddings into a forward recurrent neural network. (Figure 2(b): the detail of the Mul-Label layer. Figure 2 : The multi-label neural sequence labeling model for discourse mode identification.) Here,", "cite_spans": [ { "start": 656, "end": 678, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF39" } ], "ref_spans": [ { "start": 96, "end": 107, "text": "Figure 2(a)", "ref_id": null }, { "start": 990, "end": 998, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Model", "sec_num": "4.1" }, { "text": "we use the GRU (Cho et al., 2014) as the recurrent unit. The GRU allows each recurrent unit to adaptively capture dependencies at different time scales. The output of the last time-step is used as the representation of a sentence. Discourse level bidirectional-GRU layer: An essay consists of a sequence of sentences. 
Accessing information about past and future sentences provides more context for the current prediction. Therefore, we use a bidirectional RNN to connect sentences. We use the GRU as the recurrent unit, which has also been shown effective for the semantic composition of documents in sentiment classification (Tang et al., 2015) . The BiGRU representation is the concatenation of the hidden states of the forward and backward GRU units.", "cite_spans": [ { "start": 15, "end": 33, "text": "(Cho et al., 2014)", "ref_id": "BIBREF14" }, { "start": 628, "end": 647, "text": "(Tang et al., 2015)", "ref_id": "BIBREF47" } ], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "4.1" }, { "text": "Since one sentence can have more than one discourse mode, our model allows multiple label outputs. Figure 2 (b) details the Mul-Label layer in Figure 2(a) . The representation of each sentence after the bidirectional-GRU layer is first fully connected to a hidden layer. The hidden layer output is then fully connected to a five-way output layer, corresponding to the five discourse modes. The sigmoid activation function is applied to each way to get the probability that the corresponding discourse mode should be assigned to the sentence. In the training phase, the target probability of each labeled discourse mode is set to 1 and the others are set to 0. In the prediction phase, if the predicted probability of a discourse mode is larger than 0.5, that discourse mode is assigned.", "cite_spans": [], "ref_spans": [ { "start": 99, "end": 107, "text": "Figure 2", "ref_id": null }, { "start": 143, "end": 154, "text": "Figure 2(a)", "ref_id": null } ], "eq_spans": [], "section": "Multi-Label layer:", "sec_num": null }, { "text": "Unlike NER, which processes a single sentence at a time, our task processes sequences of sentences in discourse, which are usually grouped into paragraphs that split the whole discourse into several relatively independent segments. 
Sentences from different paragraphs should have less effect on each other, even though they are adjacent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Considering Paragraph Boundaries", "sec_num": "4.1.1" }, { "text": "To capture paragraph boundary information, we insert an empty sentence at the end of every paragraph to indicate a paragraph boundary. The empty sentence is represented by a zero vector and its outputs are set to zeros as well. We expect this modification to better capture position-related information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Considering Paragraph Boundaries", "sec_num": "4.1.1" }, { "text": "We implement the model using the Keras library. 2 The models are trained with the binary cross-entropy objective. The optimizer is Adam (Kingma and Ba, 2014) . The word embedding dimension is 50. The dimension of the hidden layer in the Mul-Label layer is 100. The length of sentences is fixed at 40. All other parameters are set to their default values. We adopt an early stopping strategy (Caruana et al., 2000) to decide when the training process stops.", "cite_spans": [ { "start": 48, "end": 49, "text": "2", "ref_id": null }, { "start": 136, "end": 157, "text": "(Kingma and Ba, 2014)", "ref_id": "BIBREF32" }, { "start": 388, "end": 410, "text": "(Caruana et al., 2000)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Implementation Details", "sec_num": "4.2" }, { "text": "We use 100 essays as the test data. The remaining ones are used as the training data. 
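A minimal sketch of this boundary-marking preprocessing step is given below; the function and variable names are ours, not taken from the paper's implementation.

```python
import numpy as np

def add_paragraph_boundaries(paragraphs, dim=50):
    # paragraphs: list of paragraphs, each a list of sentence vectors.
    # Appends a zero-vector 'empty sentence' after each paragraph so the
    # discourse-level layer can see where paragraphs end; the boundary
    # positions also get all-zero label targets during training.
    sequence, is_boundary = [], []
    for para in paragraphs:
        for vec in para:
            sequence.append(np.asarray(vec))
            is_boundary.append(False)
        sequence.append(np.zeros(dim))
        is_boundary.append(True)
    return np.stack(sequence), is_boundary
```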
10% of the shuffled training data is used for validation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "4.3.1" }, { "text": "We compare the following systems:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparisons", "sec_num": "4.3.2" }, { "text": "\u2022 SVM: We use bag-of-ngram (unigram and bigram) features to train a support vector classifier for sentence classification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparisons", "sec_num": "4.3.2" }, { "text": "\u2022 CNN: We implement a convolutional neural network (CNN) based method (Kim, 2014) , as it is the state-of-the-art for sentence classification.", "cite_spans": [ { "start": 70, "end": 81, "text": "(Kim, 2014)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Comparisons", "sec_num": "4.3.2" }, { "text": "\u2022 GRU: We use the sentence level representation in Figure 2 (a) for sentence classification.", "cite_spans": [], "ref_spans": [ { "start": 51, "end": 59, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Comparisons", "sec_num": "4.3.2" }, { "text": "\u2022 GRU-GRU (GG): The method introduced in this paper in \u00a74.1, but without considering paragraph information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparisons", "sec_num": "4.3.2" }, { "text": "\u2022 GRU-GRU-SEG (GG-SEG): The model that adds paragraph information on top of GG, as introduced in \u00a74.1.1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparisons", "sec_num": "4.3.2" }, { "text": "The first three classification-based methods classify sentences independently. To deal with multiple labels, a classifier is trained for each discourse mode separately. At prediction time, if the classifier for any discourse mode predicts a sentence as positive, the corresponding discourse mode is assigned. Table 4 shows the experimental results. 
We evaluate the systems for each discourse mode with the F1-score, which is the harmonic mean of precision and recall. The best performance is in bold.", "cite_spans": [], "ref_spans": [ { "start": 319, "end": 326, "text": "Table 4", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Comparisons", "sec_num": "4.3.2" }, { "text": "The SVM performs worst among all systems. This is due to data sparseness and the term-mismatch problem, since the annotated dataset is not large enough. In contrast, systems based on neural networks with pre-trained word embeddings achieve much better performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Results", "sec_num": "4.3.3" }, { "text": "The CNN and GRU have comparable performance. The GRU is slightly better. Neither method considers the semantic representations of adjacent sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Results", "sec_num": "4.3.3" }, { "text": "The GG and GG-SEG exploit the semantic information of sentences in a sequence through the bidirectional GRU layer. The results demonstrate that considering such information improves the performance on all discourse modes. This demonstrates the advantage of sequential identification over isolated sentence classification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Results", "sec_num": "4.3.3" }, { "text": "We can see that the GG-SEG further improves the performance on the three minority discourse modes compared with GG. This suggests that the minority modes may have stronger preferences for specific locations. Exposition benefits most, since many exposition sentences in our dataset are isolated. The performance on argument is not so good. As discussed in the corpus analysis, the argument and emotion expressing modes interact frequently. Because emotion expressing sentences are far more numerous, distinguishing argument sentences from them is hard. 
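Per-mode F1 over multi-label predictions can be computed as below. This is an illustrative sketch; the paper does not give its scoring script.

```python
def per_mode_f1(gold, pred, modes):
    # gold, pred: one set of discourse-mode labels per sentence
    scores = {}
    for m in modes:
        tp = sum(1 for g, p in zip(gold, pred) if m in g and m in p)
        fp = sum(1 for g, p in zip(gold, pred) if m not in g and m in p)
        fn = sum(1 for g, p in zip(gold, pred) if m in g and m not in p)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        # F1 is the harmonic mean of precision and recall
        scores[m] = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return scores
```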
Indeed, their functions in narrative essays seem similar: both deepen the author's response or evoke the reader's response to the story.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Results", "sec_num": "4.3.3" }, { "text": "The overall average F1-score reaches 0.7, and the performance on identifying the three most common discourse modes is consistent, with an average F1-score above 0.76 using the proposed neural sequence labeling models. Automatic discourse mode identification should therefore be feasible.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Results", "sec_num": "4.3.3" }, { "text": "Discourse mode identification can potentially provide features for downstream NLP applications. This section describes our attempt to explore discourse modes for automatic essay scoring (AES).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Essay Scoring with Discourse Modes", "sec_num": "5" }, { "text": "We adopt the standard regression framework for essay scoring. We use support vector regression (SVR) and Bayesian linear ridge regression (BLRR), which are used in recent work (Phandi et al., 2015) . 
The key is to design effective features.", "cite_spans": [ { "start": 177, "end": 198, "text": "(Phandi et al., 2015)", "ref_id": "BIBREF42" } ], "ref_spans": [], "eq_spans": [], "section": "Essay Scoring Framework", "sec_num": "5.1" }, { "text": "The basic feature sets are based on (Phandi et al., 2015) . The original feature sets include:", "cite_spans": [ { "start": 36, "end": 57, "text": "(Phandi et al., 2015)", "ref_id": "BIBREF42" } ], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "5.2" }, { "text": "\u2022 Length features \u2022 Part-Of-Speech (POS) features \u2022 Prompt features \u2022 Bag of words features", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "5.2" }, { "text": "Table 5 (details of the three datasets for AES) lists, for each prompt, the number of essays, average length, score range and median score: Prompt 1: 4000 essays, average length 628, score range 0-60, median 46; Prompt 2: 4000 essays, average length 660, score range 0-50, median 41; Prompt 3: 3300 essays, average length 642, score range 0-50, median 41. We re-implement the feature extractors exactly according to the description in (Phandi et al., 2015) , except for the POS features, since we don't", "cite_spans": [ { "start": 79, "end": 99, "text": "(Phandi et al., 2015", "ref_id": "BIBREF42" } ], "ref_spans": [ { "start": 176, "end": 270, "text": "Range Median 1 4000 628 0-60 46 2 4000 660 0-50 41 3 3300 642 0-50 41 Table 5", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Features", "sec_num": "5.2" }, { "text": "have correct POS ngrams for Chinese. We add two additional features: (1) the number of words that occur in the Chinese Proficiency Test (Level 6) vocabulary;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "5.2" }, { "text": "(2) the number of Chinese idioms used. We further design discourse-mode-related features for each essay:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "5.2" }, { "text": "\u2022 Mode ratio: For each discourse mode, we compute its mode ratio as ratio = #sentences with the discourse mode / #sentences in the essay. 
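The mode ratio features can be computed from the per-sentence label sets as follows; this is a sketch with assumed data structures, not the paper's code.

```python
def mode_ratios(sentence_modes, modes):
    # sentence_modes: one set of predicted discourse modes per sentence.
    # Ratios are computed per mode, so a sentence with several modes
    # contributes to several ratios and the ratios need not sum to 1.
    n = len(sentence_modes)
    return {m: sum(1 for labels in sentence_modes if m in labels) / n
            for m in modes}
```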
Such features indicate the distribution of discourse modes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "5.2" }, { "text": "\u2022 Bag of ngrams of discourse modes: We use the counts of unigrams and bigrams of the dominant discourse modes over the sequence of sentences in the essay as features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features", "sec_num": "5.2" }, { "text": "The experiments were conducted on narrative essays written by Chinese middle school students in their native language during regional tests. There are three prompts, and students are required to write an essay related to the given prompt of no less than 600 Chinese characters. All these essays were evaluated by professional teachers. We randomly sampled essays from each prompt for the experiments. Table 5 shows the details of the datasets. We ran experiments on each prompt dataset separately with 5-fold cross-validation.", "cite_spans": [], "ref_spans": [ { "start": 392, "end": 399, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Experimental Settings", "sec_num": "5.3" }, { "text": "The GG-SEG model was used to identify the discourse modes of sentences. Notice that a sentence can have multiple discourse modes. The mode ratio features are computed for each mode separately. When extracting the bag of ngrams of discourse modes features, the discourse mode with the highest prediction probability was chosen as the dominant discourse mode.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Settings", "sec_num": "5.3" }, { "text": "We use the Quadratic Weighted Kappa (QWK) as the evaluation metric. Table 6 shows the evaluation results of AES on the three datasets. We can see that the BLRR algorithm performs better than the SVR algorithm. No matter which algorithm is adopted, adding discourse mode features makes positive contributions to AES compared with using the basic feature sets. 
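QWK compares two integer score vectors with quadratic disagreement weights. The sketch below is a standard formulation of the metric, not necessarily the exact implementation used here.

```python
import numpy as np

def quadratic_weighted_kappa(rater_a, rater_b, min_rating, max_rating):
    n = max_rating - min_rating + 1
    O = np.zeros((n, n))                 # observed co-occurrence matrix
    for a, b in zip(rater_a, rater_b):
        O[a - min_rating, b - min_rating] += 1
    # expected matrix under independence of the two raters
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / len(rater_a)
    i, j = np.indices((n, n))
    W = (i - j) ** 2 / (n - 1) ** 2      # quadratic disagreement weights
    return 1.0 - (W * O).sum() / (W * E).sum()
```

Perfect agreement yields 1, chance agreement yields 0, and systematic disagreement yields negative values.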
The trends are consistent over all three datasets.", "cite_spans": [], "ref_spans": [ { "start": 68, "end": 75, "text": "Table 6", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Experimental Settings", "sec_num": "5.3" }, { "text": "We are interested in which discourse mode correlates best with essay scores. Table 7 shows the Pearson correlation coefficient between each mode ratio and the essay score. LEN represents the correlation of essay length and is listed as a reference. We can see that the ratio of narration has a negative correlation, which means that just narrating stories without auxiliary discourse modes would lead to poor scores. The description mode ratio has the strongest positive correlation with essay scores. This may indicate that using vivid language to provide detailed information is essential in writing narrative essays. Emotion expressing also has a positive correlation. This is reasonable, since emotional writing can draw readers into the stories. The ratio of argument shows a negative correlation. The reasons may be that, first, the identification of argument is not good enough, and second, the existence of an argument does not mean the quality of the argumentation is good. Exposition has little effect on essay scores. Generally, the distribution of discourse modes correlates with the quality of essays. This may relate to the difficulty of manipulating different discourse modes. It is easy for students to use narration, but it is more difficult to manipulate description and emotion expressing well. As a result, the ability of descriptive and emotional writing should be an indicator of language proficiency and can better distinguish the quality of writing.", "cite_spans": [], "ref_spans": [ { "start": 75, "end": 82, "text": "Table 7", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Impact of discourse mode ratio on scores:", "sec_num": null }, { "text": "It is easy to understand that length is a strong indicator for essay scoring. 
It is interesting to study how the performance of the AES system changes when the effect of length becomes weaker, e.g., when the lengths of essays are close.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Impact on scoring essays with various length:", "sec_num": null }, { "text": "We conducted experiments on essays with various lengths. Only essays whose length is no less than a given threshold are selected for evaluation. The threshold is set to 100, 200, 400 and 600 Chinese characters respectively. We ran 5-fold cross-validation with BLRR on the datasets after essay selection. Figure 3 shows the results on the three datasets. We can see the following trends: (1) The QWK scores decrease as shorter essays are gradually removed; (2) Adding discourse mode features always improves the performance; (3) As the threshold becomes larger, the improvement from adding discourse mode features becomes larger.", "cite_spans": [], "ref_spans": [ { "start": 306, "end": 314, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Impact on scoring essays with various length:", "sec_num": null }, { "text": "The results indicate that the current AES system can achieve a high correlation score when the lengths of essays differ markedly. Even simple features like length can judge that short essays tend to have low scores. However, when the lengths of essays are close, AES faces greater challenges, because a deeper understanding of the properties of well-written essays is required. In such situations, features that can model more advanced aspects of writing, such as discourse modes, should play a more important role. 
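The mode ratio correlations reported above are standard Pearson coefficients. For reference, a minimal sketch:

```python
import math

def pearson(xs, ys):
    # Pearson correlation, e.g. between a mode's per-essay ratio
    # (or essay length, for the LEN row) and the essay scores
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```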
Such features should also be essential for evaluating essays written in the writer's native language, where spelling and grammar are no longer major issues.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Impact on scoring essays with various length:", "sec_num": null }, { "text": "This paper has introduced a fundamental but understudied task in NLP, discourse mode identification, which in this work aims to automatically identify five discourse modes in essays.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "A corpus of narrative student essays was manually annotated with discourse modes at the sentence level, with acceptable inter-annotator agreement. The corpus analysis revealed several characteristics of discourse modes, including their distribution, co-occurrence and transition patterns.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Considering these characteristics, we proposed a neural sequence labeling approach for identifying discourse modes. The experimental results demonstrate that automatic discourse mode identification is feasible.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "We evaluated discourse mode features for automatic essay scoring and drew preliminary observations. Discourse mode features can make positive contributions, especially in challenging situations where simple surface features don't work well. 
The ratios of description and emotion expressing are shown to be positively correlated with essay scores.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "In the future, we plan to exploit discourse mode identification to provide novel features for more downstream NLP applications.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "In some cases, we use emotion for short.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/fchollet/keras/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Automatic text scoring using neural networks", "authors": [ { "first": "Dimitrios", "middle": [], "last": "Alikaniotis", "suffix": "" }, { "first": "Helen", "middle": [], "last": "Yannakoudakis", "suffix": "" }, { "first": "Marek", "middle": [], "last": "Rei", "suffix": "" } ], "year": 2016, "venue": "Proceedings of ACL 2016", "volume": "", "issue": "", "pages": "715--725", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dimitrios Alikaniotis, Helen Yannakoudakis, and Marek Rei. 2016. Automatic text scoring using neural networks. In Proceedings of ACL 2016. pages 715-725.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "On rhetoric: A theory of civic discourse", "authors": [ { "first": "Omer", "middle": [], "last": "Aristotle", "suffix": "" }, { "first": "George A", "middle": [], "last": "Kennedy", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Omer Aristotle and George A Kennedy. 2006. On rhetoric: A theory of civic discourse. Oxford University Press.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Automated essay scoring with e-rater R v. 2. 
The Journal of Technology", "authors": [ { "first": "Yigal", "middle": [], "last": "Attali", "suffix": "" }, { "first": "Jill", "middle": [], "last": "Burstein", "suffix": "" } ], "year": 2006, "venue": "Learning and Assessment", "volume": "4", "issue": "3", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yigal Attali and Jill Burstein. 2006. Automated essay scoring with e-rater R v. 2. The Journal of Technol- ogy, Learning and Assessment 4(3).", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "English composition and rhetoric", "authors": [ { "first": "Alexander", "middle": [], "last": "Bain", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexander Bain. 1890. English composition and rhetoric. Longmans, Green & Company.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Modeling local coherence: An entity-based approach", "authors": [ { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2008, "venue": "Computational Linguistics", "volume": "34", "issue": "1", "pages": "1--34", "other_ids": {}, "num": null, "urls": [], "raw_text": "Regina Barzilay and Mirella Lapata. 2008. Modeling local coherence: An entity-based approach. 
Compu- tational Linguistics 34(1):1-34.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A neural probabilistic language model", "authors": [ { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "R\u00e9jean", "middle": [], "last": "Ducharme", "suffix": "" }, { "first": "Pascal", "middle": [], "last": "Vincent", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Jauvin", "suffix": "" } ], "year": 2003, "venue": "Journal of machine learning research", "volume": "3", "issue": "", "pages": "1137--1155", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoshua Bengio, R\u00e9jean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic lan- guage model. Journal of machine learning research 3(Feb):1137-1155.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Research in written composition", "authors": [ { "first": "Richard", "middle": [ "Reed" ], "last": "Braddock", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Lloyd-Jones", "suffix": "" }, { "first": "Lowell", "middle": [], "last": "Schoer", "suffix": "" } ], "year": 1963, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Reed Braddock, Richard Lloyd-Jones, and Lowell Schoer. 1963. Research in written compo- sition. JSTOR.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "An improved error model for noisy channel spelling correction", "authors": [ { "first": "Eric", "middle": [], "last": "Brill", "suffix": "" }, { "first": "C", "middle": [], "last": "Robert", "suffix": "" }, { "first": "", "middle": [], "last": "Moore", "suffix": "" } ], "year": 2000, "venue": "Proceedings of ACL 2000", "volume": "", "issue": "", "pages": "286--293", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eric Brill and Robert C Moore. 2000. An improved error model for noisy channel spelling correction. In Proceedings of ACL 2000. 
pages 286-293.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Modern rhetoric", "authors": [ { "first": "Cleanth", "middle": [], "last": "Brooks", "suffix": "" }, { "first": "Robert", "middle": [ "Penn" ], "last": "Warren", "suffix": "" } ], "year": 1958, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cleanth Brooks and Robert Penn Warren. 1958. Mod- ern rhetoric. Harcourt, Brace.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "The e-rater R scoring engine: Automated essay scoring with natural language processing", "authors": [ { "first": "Jill", "middle": [], "last": "Burstein", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jill Burstein. 2003. The e-rater R scoring engine: Au- tomated essay scoring with natural language pro- cessing. .", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Finding the write stuff: Automatic identification of discourse structure in student essays. Intelligent Systems", "authors": [ { "first": "Jill", "middle": [], "last": "Burstein", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2003, "venue": "IEEE", "volume": "18", "issue": "1", "pages": "32--39", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jill Burstein, Daniel Marcu, and Kevin Knight. 2003. Finding the write stuff: Automatic identification of discourse structure in student essays. 
Intelligent Systems, IEEE 18(1):32-39.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Overfitting in neural nets: Backpropagation, conjugate gradient, and early stopping", "authors": [ { "first": "Rich", "middle": [], "last": "Caruana", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Lawrence", "suffix": "" }, { "first": "Lee", "middle": [], "last": "Giles", "suffix": "" } ], "year": 2000, "venue": "Proceedings of NIPS 2000", "volume": "", "issue": "", "pages": "402--408", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rich Caruana, Steve Lawrence, and Lee Giles. 2000. Overfitting in neural nets: Backpropagation, conju- gate gradient, and early stopping. In Proceedings of NIPS 2000. pages 402-408.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Automated essay scoring by maximizing human-machine agreement", "authors": [ { "first": "Hongbo", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Ben", "middle": [], "last": "He", "suffix": "" } ], "year": 2013, "venue": "Proceedings of EMNLP 2013", "volume": "", "issue": "", "pages": "1741--1752", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hongbo Chen and Ben He. 2013. Automated essay scoring by maximizing human-machine agreement. In Proceedings of EMNLP 2013. pages 1741-1752.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Named entity recognition with bidirectional lstm-cnns", "authors": [ { "first": "P", "middle": [ "C" ], "last": "Jason", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Chiu", "suffix": "" }, { "first": "", "middle": [], "last": "Nichols", "suffix": "" } ], "year": 2016, "venue": "Transactions of the Association for Computational Linguistics", "volume": "4", "issue": "", "pages": "357--370", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jason PC Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional lstm-cnns. 
Transac- tions of the Association for Computational Linguis- tics 4:357-370.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation", "authors": [ { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Bart", "middle": [], "last": "Van Merri\u00ebnboer Caglar Gulcehre", "suffix": "" }, { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" } ], "year": 2014, "venue": "Proceedings of EMNLP 2014", "volume": "", "issue": "", "pages": "1724--1734", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kyunghyun Cho, Bart van Merri\u00ebnboer Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. In Proceedings of EMNLP 2014. pages 1724-1734.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "The handbook of computational linguistics and natural language processing", "authors": [ { "first": "Alexander", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Fox", "suffix": "" }, { "first": "Shalom", "middle": [], "last": "Lappin", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexander Clark, Chris Fox, and Shalom Lappin. 2013. The handbook of computational linguistics and nat- ural language processing. 
John Wiley & Sons.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Toward a more valid account of functional text quality: The case of the patient information leaflet", "authors": [ { "first": "Rosemary", "middle": [], "last": "Clerehan", "suffix": "" }, { "first": "Rachelle", "middle": [], "last": "Buchbinder", "suffix": "" } ], "year": 2006, "venue": "Discourse Communication Studies", "volume": "26", "issue": "1", "pages": "39--68", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rosemary Clerehan and Rachelle Buchbinder. 2006. Toward a more valid account of functional text qual- ity: The case of the patient information leaflet. Tex- t & Talk-An Interdisciplinary Journal of Language, Discourse Communication Studies 26(1):39-68.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "A coefficient of agreement for nominal scales", "authors": [ { "first": "Jacob", "middle": [], "last": "Cohen", "suffix": "" } ], "year": 1960, "venue": "Educational and psychological measurement", "volume": "20", "issue": "1", "pages": "37--46", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Cohen et al. 1960. A coefficient of agreement for nominal scales. 
Educational and psychological measurement 20(1):37-46.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Natural language processing (almost) from scratch", "authors": [ { "first": "Ronan", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "L\u00e9on", "middle": [], "last": "Bottou", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Karlen", "suffix": "" }, { "first": "Koray", "middle": [], "last": "Kavukcuoglu", "suffix": "" }, { "first": "Pavel", "middle": [], "last": "Kuksa", "suffix": "" } ], "year": 2011, "venue": "Journal of Machine Learning Research", "volume": "12", "issue": "", "pages": "2493--2537", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research 12(Aug):2493-2537.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "The rise and fall of the modes of discourse", "authors": [ { "first": "J", "middle": [], "last": "Robert", "suffix": "" }, { "first": "", "middle": [], "last": "Connors", "suffix": "" } ], "year": 1981, "venue": "College Composition and Communication", "volume": "32", "issue": "4", "pages": "444--455", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert J Connors. 1981. The rise and fall of the modes of discourse. 
College Composition and Communi- cation 32(4):444-455.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Constrained multi-task learning for automated essay scoring", "authors": [ { "first": "Ronan", "middle": [], "last": "Cummins", "suffix": "" }, { "first": "Meng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Ted", "middle": [], "last": "Briscoe", "suffix": "" } ], "year": 2016, "venue": "Proceedings of ACL 2016", "volume": "", "issue": "", "pages": "789--799", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ronan Cummins, Meng Zhang, and Ted Briscoe. 2016. Constrained multi-task learning for automated essay scoring. In Proceedings of ACL 2016. pages 789- 799.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Automatic features for essay scoring -an empirical study", "authors": [ { "first": "Fei", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2016, "venue": "Proceedings of EMNLP 2016", "volume": "", "issue": "", "pages": "1072--1077", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fei Dong and Yue Zhang. 2016. Automatic features for essay scoring -an empirical study. In Proceedings of EMNLP 2016. pages 1072-1077.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Transitionbased dependency parsing with stack long shortterm memory", "authors": [ { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Miguel", "middle": [], "last": "Ballesteros", "suffix": "" }, { "first": "Wang", "middle": [], "last": "Ling", "suffix": "" }, { "first": "Austin", "middle": [], "last": "Matthews", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2015, "venue": "Proceedings of ACL 2015", "volume": "", "issue": "", "pages": "334--343", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. 
Transition- based dependency parsing with stack long short- term memory. In Proceedings of ACL 2015. pages 334-343.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Situation entity types: automatic classification of clause-level aspect", "authors": [ { "first": "Annemarie", "middle": [], "last": "Friedrich", "suffix": "" }, { "first": "Alexis", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "Manfred", "middle": [], "last": "Pinkal", "suffix": "" } ], "year": 2016, "venue": "Proceedings of ACL 2016", "volume": "", "issue": "", "pages": "1757--1768", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annemarie Friedrich, Alexis Palmer, and Manfred Pinkal. 2016. Situation entity types: automatic clas- sification of clause-level aspect. In Proceedings of ACL 2016. pages 1757-1768.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Supervised sequence labelling", "authors": [ { "first": "Alex", "middle": [], "last": "Graves", "suffix": "" } ], "year": 2012, "venue": "Supervised Sequence Labelling with Recurrent Neural Networks", "volume": "", "issue": "", "pages": "5--13", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Graves. 2012. Supervised sequence labelling. In Supervised Sequence Labelling with Recurrent Neu- ral Networks, Springer Berlin Heidelberg, pages 5- 13.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Centering: A framework for modeling the local coherence of discourse", "authors": [ { "first": "J", "middle": [], "last": "Barbara", "suffix": "" }, { "first": "Scott", "middle": [], "last": "Grosz", "suffix": "" }, { "first": "Aravind K", "middle": [], "last": "Weinstein", "suffix": "" }, { "first": "", "middle": [], "last": "Joshi", "suffix": "" } ], "year": 1995, "venue": "", "volume": "21", "issue": "", "pages": "203--225", "other_ids": {}, "num": null, "urls": [], "raw_text": "Barbara J Grosz, Scott Weinstein, and Aravind K Joshi. 1995. 
Centering: A framework for modeling the local coherence of discourse. Computational linguistics 21(2):203-225.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "TextTiling: Segmenting text into multi-paragraph subtopic passages", "authors": [ { "first": "Marti", "middle": [ "A" ], "last": "Hearst", "suffix": "" } ], "year": 1997, "venue": "Computational linguistics", "volume": "23", "issue": "1", "pages": "33--64", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marti A Hearst. 1997. TextTiling: Segmenting text into multi-paragraph subtopic passages. Computational linguistics 23(1):33-64.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Coherence and coreference", "authors": [ { "first": "Jerry", "middle": [ "R" ], "last": "Hobbs", "suffix": "" } ], "year": 1979, "venue": "Cognitive science", "volume": "3", "issue": "1", "pages": "67--90", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jerry R Hobbs. 1979. Coherence and coreference. Cognitive science 3(1):67-90.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Long short-term memory", "authors": [ { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1997, "venue": "Neural computation", "volume": "9", "issue": "8", "pages": "1735--1780", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735-1780.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "On the structure of scientific texts", "authors": [ { "first": "John", "middle": [], "last": "Hutchins", "suffix": "" } ], "year": 1977, "venue": "UEA Papers in Linguistics", "volume": "5", "issue": "3", "pages": "18--39", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Hutchins. 1977.
On the structure of scientific texts. UEA Papers in Linguistics 5(3):18-39.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Representation learning for text-level discourse parsing", "authors": [ { "first": "Yangfeng", "middle": [], "last": "Ji", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Eisenstein", "suffix": "" } ], "year": 2014, "venue": "Proceedings of ACL 2014", "volume": "", "issue": "", "pages": "13--24", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yangfeng Ji and Jacob Eisenstein. 2014. Representation learning for text-level discourse parsing. In Proceedings of ACL 2014. pages 13-24.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Convolutional neural networks for sentence classification", "authors": [ { "first": "Yoon", "middle": [], "last": "Kim", "suffix": "" } ], "year": 2014, "venue": "Proceedings of EMNLP 2014", "volume": "", "issue": "", "pages": "1746--1751", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of EMNLP 2014. pages 1746-1751.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "Diederik", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1412.6980" ] }, "num": null, "urls": [], "raw_text": "Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization.
arXiv preprint arXiv:1412.6980.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Automatic essay grading using text categorization techniques", "authors": [ { "first": "Leah", "middle": [ "S" ], "last": "Larkey", "suffix": "" } ], "year": 1998, "venue": "Proceedings of SIGIR 1998", "volume": "", "issue": "", "pages": "90--95", "other_ids": {}, "num": null, "urls": [], "raw_text": "Leah S Larkey. 1998. Automatic essay grading using text categorization techniques. In Proceedings of SIGIR 1998. pages 90-95.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "What makes writing great? First experiments on article quality prediction in the science journalism domain", "authors": [ { "first": "Annie", "middle": [], "last": "Louis", "suffix": "" }, { "first": "Ani", "middle": [], "last": "Nenkova", "suffix": "" } ], "year": 2013, "venue": "Transactions of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "341--352", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annie Louis and Ani Nenkova. 2013. What makes writing great? First experiments on article quality prediction in the science journalism domain. Transactions of the Association for Computational Linguistics 1:341-352.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "End-to-end sequence labeling via bi-directional lstm-cnns-crf", "authors": [ { "first": "Xuezhe", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2016, "venue": "Proceedings of ACL 2016", "volume": "", "issue": "", "pages": "1064--1074", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional lstm-cnns-crf. In Proceedings of ACL 2016.
pages 1064-1074.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Rhetorical structure theory: Toward a functional theory of text organization", "authors": [ { "first": "William", "middle": [ "C" ], "last": "Mann", "suffix": "" }, { "first": "Sandra", "middle": [ "A" ], "last": "Thompson", "suffix": "" } ], "year": 1988, "venue": "Text-Interdisciplinary Journal for the Study of Discourse", "volume": "8", "issue": "", "pages": "243--281", "other_ids": {}, "num": null, "urls": [], "raw_text": "William C Mann and Sandra A Thompson. 1988. Rhetorical structure theory: Toward a functional theory of text organization. Text-Interdisciplinary Journal for the Study of Discourse 8(3):243-281.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Linking discourse modes and situation entity types in a cross-linguistic corpus study", "authors": [ { "first": "Kleio-Isidora", "middle": [], "last": "Mavridou", "suffix": "" }, { "first": "Annemarie", "middle": [], "last": "Friedrich", "suffix": "" }, { "first": "Melissa", "middle": [ "Peate" ], "last": "S\u00f8rensen", "suffix": "" }, { "first": "Alexis", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "Manfred", "middle": [], "last": "Pinkal", "suffix": "" } ], "year": 2015, "venue": "Workshop on Linking Models of Lexical, Sentential and Discourse-level Semantics (LSDSem)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kleio-Isidora Mavridou, Annemarie Friedrich, Melissa Peate S\u00f8rensen, Alexis Palmer, and Manfred Pinkal. 2015. Linking discourse modes and situation entity types in a cross-linguistic corpus study. In Workshop on Linking Models of Lexical, Sentential and Discourse-level Semantics (LSDSem).
page 12.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Genre as diversity, and rhetorical mode as unity in language use", "authors": [ { "first": "Jos\u00e9", "middle": [ "Luiz" ], "last": "Meurer", "suffix": "" } ], "year": 2002, "venue": "Ilha do Desterro A Journal of English Language, Literatures in English and Cultural Studies", "volume": "", "issue": "43", "pages": "61--082", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jos\u00e9 Luiz Meurer. 2002. Genre as diversity, and rhetorical mode as unity in language use. Ilha do Desterro A Journal of English Language, Literatures in English and Cultural Studies (43):061-082.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Distributed representations of words and phrases and their compositionality", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Proceedings of NIPS 2013", "volume": "", "issue": "", "pages": "3111--3119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Proceedings of NIPS 2013.
pages 3111-3119.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "A sequencing model for situation entity classification", "authors": [ { "first": "Alexis", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "Elias", "middle": [], "last": "Ponvert", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Baldridge", "suffix": "" }, { "first": "Carlota", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2007, "venue": "Proceedings of ACL 2007", "volume": "", "issue": "", "pages": "896--903", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexis Palmer, Elias Ponvert, Jason Baldridge, and Carlota Smith. 2007. A sequencing model for situation entity classification. In Proceedings of ACL 2007. pages 896-903.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Modeling organization in student essays", "authors": [ { "first": "Isaac", "middle": [], "last": "Persing", "suffix": "" }, { "first": "Alan", "middle": [], "last": "Davis", "suffix": "" }, { "first": "Vincent", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2010, "venue": "Proceedings of EMNLP 2010", "volume": "", "issue": "", "pages": "229--239", "other_ids": {}, "num": null, "urls": [], "raw_text": "Isaac Persing, Alan Davis, and Vincent Ng. 2010. Modeling organization in student essays. In Proceedings of EMNLP 2010. pages 229-239.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Flexible domain adaptation for automated essay scoring using correlated linear regression", "authors": [ { "first": "Peter", "middle": [], "last": "Phandi", "suffix": "" }, { "first": "Kian", "middle": [ "Ming", "A" ], "last": "Chai", "suffix": "" }, { "first": "Hwee Tou", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2015, "venue": "Proceedings of EMNLP 2015", "volume": "", "issue": "", "pages": "431--439", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Phandi, Kian Ming A.
Chai, and Hwee Tou Ng. 2015. Flexible domain adaptation for automated essay scoring using correlated linear regression. In Proceedings of EMNLP 2015. pages 431-439.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Generating confusion sets for context-sensitive error correction", "authors": [ { "first": "Alla", "middle": [], "last": "Rozovskaya", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2010, "venue": "Proceedings of EMNLP 2010", "volume": "", "issue": "", "pages": "961--970", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alla Rozovskaya and Dan Roth. 2010. Generating confusion sets for context-sensitive error correction. In Proceedings of EMNLP 2010. pages 961-970.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Modes of discourse: The local structure of texts", "authors": [ { "first": "Carlota", "middle": [ "S" ], "last": "Smith", "suffix": "" } ], "year": 2003, "venue": "", "volume": "103", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carlota S Smith. 2003. Modes of discourse: The local structure of texts, volume 103. Cambridge University Press.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Discourse element identification in student essays based on global and local cohesion", "authors": [ { "first": "Wei", "middle": [], "last": "Song", "suffix": "" }, { "first": "Ruiji", "middle": [], "last": "Fu", "suffix": "" }, { "first": "Lizhen", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2015, "venue": "Proceedings of EMNLP 2015", "volume": "", "issue": "", "pages": "2255--2261", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wei Song, Ruiji Fu, Lizhen Liu, and Ting Liu. 2015. Discourse element identification in student essays based on global and local cohesion. In Proceedings of EMNLP 2015.
pages 2255-2261.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "A neural approach to automated essay scoring", "authors": [ { "first": "Kaveh", "middle": [], "last": "Taghipour", "suffix": "" }, { "first": "Hwee Tou", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2016, "venue": "Proceedings of EMNLP 2016", "volume": "", "issue": "", "pages": "1882--1891", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kaveh Taghipour and Hwee Tou Ng. 2016. A neural approach to automated essay scoring. In Proceedings of EMNLP 2016. pages 1882-1891.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Document modeling with gated recurrent neural network for sentiment classification", "authors": [ { "first": "Duyu", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Qin", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2015, "venue": "Proceedings of EMNLP 2015", "volume": "", "issue": "", "pages": "1422--1432", "other_ids": {}, "num": null, "urls": [], "raw_text": "Duyu Tang, Bing Qin, and Ting Liu. 2015. Document modeling with gated recurrent neural network for sentiment classification. In Proceedings of EMNLP 2015. pages 1422-1432.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Summarizing scientific articles: experiments with relevance and rhetorical status", "authors": [ { "first": "Simone", "middle": [], "last": "Teufel", "suffix": "" }, { "first": "Marc", "middle": [], "last": "Moens", "suffix": "" } ], "year": 2002, "venue": "Computational linguistics", "volume": "28", "issue": "4", "pages": "409--445", "other_ids": {}, "num": null, "urls": [], "raw_text": "Simone Teufel and Marc Moens. 2002. Summarizing scientific articles: experiments with relevance and rhetorical status. Computational linguistics 28(4):409-445.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Discourse structure and language technology.
Natural Language Engineering", "authors": [ { "first": "Bonnie", "middle": [], "last": "Webber", "suffix": "" }, { "first": "Markus", "middle": [], "last": "Egg", "suffix": "" }, { "first": "Valia", "middle": [], "last": "Kordoni", "suffix": "" } ], "year": 2011, "venue": "Natural Language Engineering", "volume": "18", "issue": "", "pages": "437--490", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bonnie Webber, Markus Egg, and Valia Kordoni. 2011. Discourse structure and language technology. Natural Language Engineering 18(4):437-490.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "Buy one get one free: Distant annotation of Chinese tense, event type and modality", "authors": [ { "first": "Nianwen", "middle": [], "last": "Xue", "suffix": "" }, { "first": "Yuchen", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2014, "venue": "Proceedings of LREC 2014", "volume": "", "issue": "", "pages": "1412--1416", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nianwen Xue and Yuchen Zhang. 2014. Buy one get one free: Distant annotation of Chinese tense, event type and modality. In Proceedings of LREC 2014. pages 1412-1416.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "\u00ce (An Introduction to Writing)", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "\u00ce (An Introduction to Writing). Wuhan, Hubei Educational Press.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "type_str": "figure", "text": "The distribution of dominant modes." }, "FIGREF2": { "num": null, "uris": null, "type_str": "figure", "text": "QWK scores on essays satisfying different length thresholds on three prompts. Basic: the basic feature sets; mode: discourse mode features." }, "TABREF2": { "html": null, "text": "Co-occurrence of discourse modes in the same sentences.
The numbers on the diagonal indicate the proportion of sentences with a single mode.", "num": null, "content": "
from \\ to NarNar 72%Exp Des Emo Arg -17% 7% 1%
Exp59% 8%8%16%6%
Des42%-53%3%-
Emo25% 2%4%66%1%
Arg27%-4%12% 54%
Begin with 50% 3%6%32%7%
End with12% 1%2%76%6%
", "type_str": "table" }, "TABREF3": { "html": null, "text": "", "num": null, "content": "", "type_str": "table" }, "TABREF6": { "html": null, "text": "The F1-scores of systems on each discourse mode.", "num": null, "content": "
", "type_str": "table" }, "TABREF7": { "html": null, "text": "Evaluation results of AES on three datasets. Basic: the basic feature sets; mode: discourse mode features.", "num": null, "content": "
Prompt  1      2      3      Avg
LEN     0.59   0.52   0.45   0.52
Des     0.23   0.24   0.24   0.24
Emo     0.09   0.15   0.12   0.12
Exp     -0.07  0.01   0.01   -0.03
Arg     -0.08  -0.06  -0.10  -0.08
Nar     -0.11  -0.15  -0.12  -0.13
", "type_str": "table" }, "TABREF8": { "html": null, "text": "Pearson correlation coefficients of mode ratio to essay score. LEN represents essay length.", "num": null, "content": "", "type_str": "table" } } } }