{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:12:16.616847Z" }, "title": "An Annotated Dataset and Automatic Approaches for Discourse Mode Identification in Low-resource Bengali Language", "authors": [ { "first": "Salim", "middle": [], "last": "Sazzed", "suffix": "", "affiliation": { "laboratory": "", "institution": "Old Dominion University Norfolk", "location": { "region": "VA", "country": "USA" } }, "email": "ssazz001@odu.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The modes of discourse aid in comprehending the convention and purpose of various forms of language used during communication. In this study, we introduce a discourse mode annotated corpus for the low-resource Bengali (also referred to as Bangla) language. The corpus consists of sentence-level annotations of three discourse modes, narrative, descriptive, and informative, of text excerpted from a number of Bengali novels. We analyze the annotated corpus to expose various linguistic aspects of discourse modes, such as class distributions and average sentence lengths. To automatically determine the mode of discourse, we apply CML (classical machine learning) classifiers with n-gram-based statistical features and a fine-tuned BERT (Bidirectional Encoder Representations from Transformers) based language model. We observe that the fine-tuned BERT-based model yields better results than the CML classifiers. Our discourse mode annotated dataset, the first of its kind in Bengali, and the evaluation provide baselines for automatic discourse mode identification in Bengali and can assist various downstream natural language processing tasks.", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "The modes of discourse aid in comprehending the convention and purpose of various forms of language used during communication. 
In this study, we introduce a discourse mode annotated corpus for the low-resource Bengali (also referred to as Bangla) language. The corpus consists of sentence-level annotations of three discourse modes, narrative, descriptive, and informative, of text excerpted from a number of Bengali novels. We analyze the annotated corpus to expose various linguistic aspects of discourse modes, such as class distributions and average sentence lengths. To automatically determine the mode of discourse, we apply CML (classical machine learning) classifiers with n-gram-based statistical features and a fine-tuned BERT (Bidirectional Encoder Representations from Transformers) based language model. We observe that the fine-tuned BERT-based model yields better results than the CML classifiers. Our discourse mode annotated dataset, the first of its kind in Bengali, and the evaluation provide baselines for automatic discourse mode identification in Bengali and can assist various downstream natural language processing tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Discourse is the notion of conversation that is expressed through language. Based on Webber et al. (2012) , discourse indicates the relationship between states, events, or beliefs manifested within one or multiple sentences in a given mode of communication. Understanding discourse structures and identifying relationships between various modes can help downstream natural language processing tasks, including text summarization (Li et al., 2016) , question answering (Verberne et al., 2007) , anaphora resolution (Hirst, 1981) , and machine translation (Li et al., 2014) .", "cite_spans": [ { "start": 85, "end": 105, "text": "Webber et al. 
(2012)", "ref_id": "BIBREF24" }, { "start": 428, "end": 445, "text": "(Li et al., 2016)", "ref_id": "BIBREF11" }, { "start": 467, "end": 490, "text": "(Verberne et al., 2007)", "ref_id": "BIBREF23" }, { "start": 513, "end": 526, "text": "(Hirst, 1981)", "ref_id": "BIBREF8" }, { "start": 553, "end": 570, "text": "(Li et al., 2014)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The modes of discourse, also referred to as rhetorical modes, represent the variety, conventions, and purposes of the dominant types of language used in communication (both oral and written). Discourse modes are important in composition writing because they contribute to several factors that affect the quality and coherence of a text. The combination and interaction of various discourse modes make a text organized and unified (Smith, 2003) . For example, a writer may begin by expressing an event through narration, then provide details using the descriptive mode, and finally establish ideas with argument. Discourse modes are also important in rhetorical research, as they are closely related to rhetoric (Connors, 1981) , which provides guidelines for effectively expressing content.", "cite_spans": [ { "start": 449, "end": 462, "text": "(Smith, 2003)", "ref_id": "BIBREF21" }, { "start": 738, "end": 753, "text": "(Connors, 1981)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Researchers have categorized modes of discourse in various ways (Rozakis, 2003; Song et al., 2017; Dhanwal et al., 2020) . Based on Rozakis (2003) , discourse modes can be classified into four categories: narration, description, exposition, and argument. 
Narration mode primarily focuses on governing the progression of the story by presenting and connecting events; exposition mode instructs or explains; argument aims to provide a convincing or persuasive statement; description tries to provide detailed depictions of characters, objects, and scenery in figurative language. Song et al. (2017) categorized the mode of discourse into five categories (narration, exposition, description, argument, and emotion-expressing sentences) in narrative essays, while Dhanwal et al. (2020) annotated the discourse modes of short stories into argumentative, narrative, descriptive, dialogic, and informative categories. Although a piece of text can be labeled with a specific mode of discourse, it is not uncommon to have text snippets with multiple modes of discourse Song et al. (2017) , where one of them plays the dominant role.", "cite_spans": [ { "start": 67, "end": 82, "text": "(Rozakis, 2003;", "ref_id": "BIBREF14" }, { "start": 83, "end": 101, "text": "Song et al., 2017;", "ref_id": "BIBREF22" }, { "start": 102, "end": 123, "text": "Dhanwal et al., 2020)", "ref_id": "BIBREF7" }, { "start": 135, "end": 149, "text": "Rozakis (2003)", "ref_id": "BIBREF14" }, { "start": 586, "end": 604, "text": "Song et al. (2017)", "ref_id": "BIBREF22" }, { "start": 766, "end": 787, "text": "Dhanwal et al. (2020)", "ref_id": "BIBREF7" }, { "start": 1056, "end": 1074, "text": "Song et al. (2017)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Although discourse structure and mode play a significant role in various downstream natural language processing tasks, research in this area is largely unexplored in Bengali. Although Bengali is the 7th most spoken language in the world 1 , NLP resources are scarce except in a few areas such as sentiment analysis (Sazzed and Jayarathna, 2019; Sazzed, 2020) or inappropriate textual content detection (Sazzed, 2021a,b,c) . 
Regarding discourse analysis, only a limited number of studies have been carried out (Chatterjee and Chakraborty, 2019; Banerjee, 2010; Sarkar and Chatterjee, 2013; Das and Stede, 2018; Das et al., 2020) . However, to the best of our knowledge, no study related to automatic discourse mode identification has been conducted yet. Thus, in this study, we introduce an annotated dataset and present a set of techniques for the automatic identification of discourse modes.", "cite_spans": [ { "start": 321, "end": 350, "text": "(Sazzed and Jayarathna, 2019;", "ref_id": "BIBREF20" }, { "start": 351, "end": 364, "text": "Sazzed, 2020)", "ref_id": "BIBREF16" }, { "start": 408, "end": 427, "text": "(Sazzed, 2021a,b,c)", "ref_id": null }, { "start": 545, "end": 560, "text": "Banerjee, 2010;", "ref_id": "BIBREF0" }, { "start": 561, "end": 589, "text": "Sarkar and Chatterjee, 2013;", "ref_id": "BIBREF15" }, { "start": 590, "end": 610, "text": "Das and Stede, 2018;", "ref_id": "BIBREF4" }, { "start": 611, "end": 628, "text": "Das et al., 2020)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Following the rough guidelines provided by Smith (2003) and Dhanwal et al. (2020) for discourse mode annotation, we manually categorize a dataset of 3310 sentences from Bengali novels into various discourse modes. The sentences are annotated with three modes of discourse: narrative, descriptive, and informative. For automatic identification of the discourse mode, we extract word n-gram based features from the text and then employ several classical machine learning (CML) classifiers, such as logistic regression (LR), support vector machine (SVM), and random forest (RF). In addition, the transformer-based multilingual BERT language model is leveraged and fine-tuned for discourse mode determination. 
We observe that the multilingual BERT model yields better performance than the CML classifiers, although the difference is not substantial compared to LR or SVM.", "cite_spans": [ { "start": 43, "end": 55, "text": "Smith (2003)", "ref_id": "BIBREF21" }, { "start": 60, "end": 81, "text": "Dhanwal et al. (2020)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The main contributions of this study can be summarized as follows: \u2022 We analyze the annotated corpus to reveal attributes of text representing various discourse modes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Contributions", "sec_num": "1.1" }, { "text": "\u2022 We employ CML classifiers with n-gram based statistical features and a fine-tuned pre-trained language model for automatically identifying various modes of discourse.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Contributions", "sec_num": "1.1" }, { "text": "The data collection process starts with identifying a set of novels from Bengali literature. We select six 20th-century Bengali novels (গোলমেলে লোক, পথের পাঁচালী, আরণ্যক, পটাশগড়ের জঙ্গল, নন্দিত নরকে, হিমু) written by three famous Bengali novelists, 'Shirshendu Mukhopadhyay', 'Bibhutibhushan Bandyopadhyay', and 'Humayun Ahmed'. Unlike English books, the electronic versions (i.e., eBooks) of Bengali books are rarely available, as eBooks are not popular among Bengali readers. Moreover, we notice that most of the eBooks available in PDF format were created by scanning images of the print versions; therefore, they are not suitable for text extraction. We find a website that provides a set of Bengali fiction in EPUB format. 
From there, we manually download the above-stated six Bengali novels and extract the text for annotation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Annotation and Collection", "sec_num": "2" }, { "text": "Three native Bengali speakers with university-level education perform the annotation. Annotating the mode of discourse in a piece of text (i.e., a sentence) is often challenging since a sentence may have multiple modes, or the distinction is often not obvious. Thus, the annotators are provided with a set of online resources and guidelines from a number of publications.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Annotation and Collection", "sec_num": "2" }, { "text": "The discourse modes are selected based on the existing works of Song et al. (2017) and Dhanwal et al. (2020) . Song et al. (2017) categorized modes of discourse into five categories (narration, exposition, description, argument, and emotion) in narrative essays, while Dhanwal et al. (2020) annotated discourse modes into argumentative, narrative, descriptive, dialogic, and informative categories. As our annotated content (i.e., excerpted sentences of Bengali novels) is more similar to the content (i.e., short stories) of Dhanwal et al. (2020) , our annotated discourse modes are more similar to their annotation. However, we notice that the presence of the argumentative mode in a fictional novel is rare since, instead of establishing an opinion, a novel tells a story in chronological order. Besides, we observe that the dialogic category itself does not comprise a new mode. Instead, it echoes the narrative, descriptive, or other modes from a third-person point of view; thus, we do not include it as a separate mode.", "cite_spans": [ { "start": 64, "end": 82, "text": "Song et al. (2017)", "ref_id": "BIBREF22" }, { "start": 87, "end": 108, "text": "Dhanwal et al. (2020)", "ref_id": "BIBREF7" }, { "start": 111, "end": 129, "text": "Song et al. 
(2017)", "ref_id": "BIBREF22" }, { "start": 524, "end": 545, "text": "Dhanwal et al. (2020)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Data Annotation and Collection", "sec_num": "2" }, { "text": "In this study, the following three discourse modes are considered for annotation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discourse Modes", "sec_num": "2.1" }, { "text": "Narrative: Narrative sentences relate to entities performing particular actions, often in chronological order, as a part of storytelling.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discourse Modes", "sec_num": "2.1" }, { "text": "Bengali: \u09b8\u09ac\u09b0\u09cd \u099c\u09df\u09be \u09c7\u099b\u09c7\u09b2\u09b0 \u0995\u09be \u09c7\u09a6\u09bf\u0996\u09df\u09be \u0985\u09ac\u09be\u0995 \u09b9\u0987\u09df\u09be \u09b0\u09bf\u09b9\u09b2 English Translation: \"Sarvajaya was surprised to see the boy's actions\" Descriptive: Descriptive statements illustrate specific entities with some kind of description so that readers can imagine them in their minds. They enable readers to visualize characters, settings, and actions. 
For example, they tell how entities look, sound, feel, taste, and smell.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discourse Modes", "sec_num": "2.1" }, { "text": "Bengali: \u098f\u0995\u09ae\u09be\u09a5\u09be \u099d\u09be\u0981 \u0995\u09dc\u09be \u099d\u09be\u0981 \u0995\u09dc\u09be \u099a\u09c1 \u09b2, \u09ad\u09be\u09bf\u09b0 \u09b6\u09be ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discourse Modes", "sec_num": "2.1" }, { "text": "English Translation: \"She has curly hair, heavy, calm, beautiful eyes, and a sleek black complexion\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u09b8\u09c1 \u09b0 \u09c7\u099a\u09be\u0996\u09ae\u09c1 \u0996, \u0995\u09c1 \u099a\u0995\u09c1 \u09c7\u099a \u0995\u09be\u09c7\u09b2\u09be \u0997\u09be\u09c7\u09df\u09b0 \u09b0\u0982\u0964", "sec_num": null }, { "text": "Informative: Informative sentences provide information regarding entities or circumstances.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u09b8\u09c1 \u09b0 \u09c7\u099a\u09be\u0996\u09ae\u09c1 \u0996, \u0995\u09c1 \u099a\u0995\u09c1 \u09c7\u099a \u0995\u09be\u09c7\u09b2\u09be \u0997\u09be\u09c7\u09df\u09b0 \u09b0\u0982\u0964", "sec_num": null }, { "text": "Bengali: \u098f\u099f\u09be \u09aa\u099f\u09be\u09b6\u0997\u09c7\u09dc\u09b0 \u098f\u0995 \u09b0\u09be\u099c\u09be \u09ac\u09be\u09bf\u09a8\u09c7\u09df\u09bf\u099b\u09b2\u0964 English Translation: It was made by a king of Potashgarh.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u09b8\u09c1 \u09b0 \u09c7\u099a\u09be\u0996\u09ae\u09c1 \u0996, \u0995\u09c1 \u099a\u0995\u09c1 \u09c7\u099a \u0995\u09be\u09c7\u09b2\u09be \u0997\u09be\u09c7\u09df\u09b0 \u09b0\u0982\u0964", "sec_num": null }, { "text": "The annotation guidelines consist of the formal and informal descriptions of the three different types of discourse modes, examples of various modes with explanations, and examples of the co-occurrence of various modes with mode 
dominance. Although the annotation is performed at the sentence level, the annotators are instructed to consider the surrounding sentences to better understand the context of each sentence. When multiple modes are present in a sentence, the annotators are asked to determine the most dominant discourse mode based on the provided guidelines and their own judgment, and to label accordingly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation Task", "sec_num": "2.2" }, { "text": "The final dataset consists of 3310 sentences annotated by the three annotators, where two annotators label all the sentences and the third annotator acts only if there is any disagreement between the first two annotators. Note that, to include varied types of events and descriptions, sentences are randomly selected from various sections of the novels by the annotators (around 50% by each of the annotators). We observe an inter-annotator agreement of 0.78 based on Cohen's kappa (Cohen, 1960) for the label assignment between the first two annotators. Table 1 depicts the distributions of various modes of discourse in the annotated dataset. As shown in Table 1 , the annotated dataset is class imbalanced. We notice that the most dominant mode in the novels is narrative since the progression of a novel involves a lot of narrative events. Overall, almost 70% of the sentences in the annotated corpus represent the narrative mode. 
The descriptive mode has 782 instances, while the informative mode is less prevalent, with only 246 samples.", "cite_spans": [ { "start": 488, "end": 501, "text": "(Cohen, 1960)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 561, "end": 568, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 663, "end": 670, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Annotation and Dataset Statistics", "sec_num": "2.3" }, { "text": "We observe that the most frequently co-occurring modes are narrative and descriptive, as chronological events are often described with some details. We find that over 20% of narrative sentences convey description to some extent. This observation is consistent with the findings of Song et al. (2017) . In the presence of multiple discourse modes within the same sentence, it is often challenging to identify the dominant one.", "cite_spans": [ { "start": 280, "end": 298, "text": "Song et al. (2017)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Annotation and Dataset Statistics", "sec_num": "2.3" }, { "text": "As seen in Table 1 , the average sentence lengths of different discourse modes vary to some extent. For example, the lengths of the sentences representing the descriptive mode are much greater than those of the other two modes. The greater length of descriptive sentences is expected since they elucidate particular entities or events with some details.", "cite_spans": [], "ref_spans": [ { "start": 11, "end": 18, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Annotation and Dataset Statistics", "sec_num": "2.3" }, { "text": "We employ four classical supervised ML classifiers: logistic regression (LR), support vector machine (SVM), random forest (RF), and extra trees (ET) for determining the discourse modes of sentences. For SVM, we apply three types of kernels: linear, polynomial, and Gaussian radial basis function (RBF). 
We find that the linear kernel performs best for our classification problem; the reported results use this kernel. The word n-gram features are utilized as input for the CML classifiers. An n-gram is a contiguous sequence of n items from a piece of text. We extract the word-level unigrams and bigrams from the text, compute the corresponding tf-idf scores, and then feed those values to the CML classifiers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classical ML Classifier", "sec_num": "3.1" }, { "text": "For the CML classifiers, the default parameter settings of the scikit-learn (Pedregosa et al., 2011) library are used. A class-balanced weight is set for all CML classifiers.", "cite_spans": [ { "start": 76, "end": 100, "text": "(Pedregosa et al., 2011)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Classical ML Classifier", "sec_num": "3.1" }, { "text": "Transformer-based pre-trained contextual embeddings such as BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019) have achieved state-of-the-art results in various text classification tasks with limited labeled data. As these language models have been trained with a large amount of unlabelled data, they possess contextual knowledge; thus, fine-tuning them with a small amount of problem-specific labeled data can attain satisfactory results.", "cite_spans": [ { "start": 68, "end": 89, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF6" }, { "start": 102, "end": 120, "text": "(Liu et al., 2019)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Deep Learning Based Classifier", "sec_num": "3.2" }, { "text": "BERT utilizes the transformer architecture to learn contextual relationships between words (or sub-words) in a piece of text. Before feeding text sequences into BERT, 15% of the words in each sequence are replaced with a [MASK] token. 
The BERT model then tries to infer the original values of the masked words utilizing the contextual information provided by the surrounding non-masked words in the sequence.", "cite_spans": [ { "start": 221, "end": 227, "text": "[MASK]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Deep Learning Based Classifier", "sec_num": "3.2" }, { "text": "The multilingual BERT (M-BERT) (Devlin et al., 2019) is the multilingual version of BERT, which was pre-trained with the Wikipedia content of 104 languages (Bengali is one of them). It consists of twelve transformer layers, where each layer contains twelve self-attention heads and a hidden size of 768, resulting in approximately 110 million parameters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Deep Learning Based Classifier", "sec_num": "3.2" }, { "text": "We fine-tune M-BERT for categorizing sentences into the three classes: narrative, descriptive, and informative. Since this is a classification task, we utilize the classification module of M-BERT. The Hugging Face library (Wolf et al., 2019 ) is used to fine-tune M-BERT.", "cite_spans": [ { "start": 222, "end": 240, "text": "(Wolf et al., 2019", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Fine Tuning", "sec_num": "3.2.1" }, { "text": "Since the initial layers of M-BERT learn only very general features, we keep them untouched. Only the last layer of M-BERT is fine-tuned for our three-class classification task. We add only one layer on top of M-BERT that acts as a classifier. We tokenize and feed our input training data to fine-tune the M-BERT model; afterward, the fine-tuned model is used for classifying the testing data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fine Tuning", "sec_num": "3.2.1" }, { "text": "A mini-batch size of 8 and a learning rate of 4e-5 are used. The training and validation split ratio is set to 80% and 20%, respectively. 
The model is optimized using the Adam optimizer (Kingma and Ba, 2014), and the loss function is set to sparse categorical cross-entropy. The model is trained for 3 epochs with early stopping.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fine Tuning", "sec_num": "3.2.1" }, { "text": "To evaluate the performances of various approaches, 5-fold cross-validation is applied.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Settings", "sec_num": "3.3" }, { "text": "The 5-fold cross-validation splits the dataset into 5 mutually exclusive subsets. It consists of 5 iterations; in each iteration, one subset is used as the testing set, and the remaining four subsets are used as the training set. The F1 score and accuracy of all three classes are reported separately.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Settings", "sec_num": "3.3" }, { "text": "The F1 score of each class is computed based on its precision and recall scores. Let c represent a particular class and c \u2032 refer to all other classes. The TP, FP, and FN for class c are defined as follows: TP = both the true label and the prediction assign a sentence to class c; FP = the true label of a sentence is class c \u2032 , while the prediction says it is class c; FN = the true label marks a sentence as class c, while the prediction assigns it to class c \u2032 . Table 2 provides the F1 scores and accuracy of the various CML-based classifiers and the transformer-based M-BERT model for discourse mode identification.", "cite_spans": [], "ref_spans": [ { "start": 441, "end": 448, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Evaluation Settings", "sec_num": "3.3" }, { "text": "The results reveal that all four CML classifiers, LR, SVM, RF, and ET, yield high performance for narrative class prediction; they achieve F1 scores between 0.84-0.89 and an accuracy of around 97%. 
For descriptive class prediction, LR and SVM perform better than RF and ET; they obtain F1 scores over 0.60, compared to around 0.4 for the decision tree-based classifiers. However, we observe that for informative class prediction, all the classifiers perform poorly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4" }, { "text": "We observe that the performances of the CML classifiers are affected by the class distribution of the dataset. Since the narrative class contains close to 70% of the instances in the dataset, the classifiers are biased towards it (Table 3) . All the CML classifiers fail to provide an acceptable level of performance for the minor informative class even after using class-balanced weights. We also employ the SMOTE (Chawla et al., 2002) oversampling technique for class balancing; however, we do not notice any noticeable performance improvement using SMOTE.", "cite_spans": [ { "start": 406, "end": 427, "text": "(Chawla et al., 2002)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 226, "end": 235, "text": "(Table 3)", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4" }, { "text": "The transformer-based multilingual language model yields slightly better performance than the CML classifiers. For the dominant narrative class, it attains an F1 score of 0.912. For the other classes, it obtains F1 scores similar to those of LR and SVM, around 0.67 and 0.05, respectively. It is noticed that all the classifiers perform poorly for the minor informative class prediction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4" }, { "text": "The results suggest that the transformer-based multilingual BERT model can be effective for discourse mode classification in Bengali text. Although we do not notice significant improvement compared to the CML classifiers in this study, this could be attributed to the limited labeled data. 
With more labeled data incorporated, the improvement could be higher (transformer-based models have shown state-of-the-art performance for various NLP tasks across languages). Low-resource languages such as Bengali suffer from data annotation issues, as there are not enough resources to create large labeled datasets. Thus, incorporating a pre-trained model can help address the scarcity of annotated data in the Bengali language to some extent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4" }, { "text": "In this study, we introduce a corpus consisting of sentence-level annotations of various modes of discourse. The corpus consists of excerpted text from Bengali novels annotated with three different discourse modes: narrative, descriptive, and informative. We provide details of the annotation procedure, such as annotation guidelines and annotator agreement, and investigate the characteristics of various discourse modes. Finally, we employ CML and deep learning-based classification approaches for automatic discourse mode identification. We observe that the transformer-based fine-tuned language model yields the best performance. 
Our future work will expand the size of the corpus and demonstrate the usefulness of discourse mode annotated data for downstream tasks such as automated essay scoring and sentiment analysis in the low-resource Bengali language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summary and Future Work", "sec_num": "5" }, { "text": "https://www.babbel.com/en/magazine/the-10-most-spoken-languages-in-the-world", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/sazzadcsedu/DiscourseBangla.git", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Context in communication: analysis of bengali spoken discourse", "authors": [ { "first": "Sanjoy", "middle": [], "last": "Banerjee", "suffix": "" } ], "year": 2010, "venue": "Rupkatha Journal on Interdisciplinary Studies in Humanities", "volume": "", "issue": "3", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sanjoy Banerjee. 2010. Context in communication: analysis of bengali spoken discourse. Rajoshree Chatterjee and Jayshree Chakraborty. 2019. Analyzing discourse coherence in bengali elementary choras (children's nursery rhymes). 
Rupkatha Journal on Interdisciplinary Studies in Humanities, 11(3).", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Smote: synthetic minority oversampling technique", "authors": [ { "first": "V", "middle": [], "last": "Nitesh", "suffix": "" }, { "first": "Kevin", "middle": [ "W" ], "last": "Chawla", "suffix": "" }, { "first": "Lawrence", "middle": [ "O" ], "last": "Bowyer", "suffix": "" }, { "first": "W Philip", "middle": [], "last": "Hall", "suffix": "" }, { "first": "", "middle": [], "last": "Kegelmeyer", "suffix": "" } ], "year": 2002, "venue": "Journal of artificial intelligence research", "volume": "16", "issue": "", "pages": "321--357", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nitesh V Chawla, Kevin W Bowyer, Lawrence O Hall, and W Philip Kegelmeyer. 2002. Smote: synthetic minority oversampling technique. Journal of artificial intelligence research, 16:321-357.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A coefficient of agreement for nominal scales", "authors": [ { "first": "Jacob", "middle": [], "last": "Cohen", "suffix": "" } ], "year": 1960, "venue": "", "volume": "20", "issue": "", "pages": "37--46", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Cohen. 1960. A coefficient of agreement for nominal scales. Educational and psychological measurement, 20(1):37-46.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "The rise and fall of the modes of discourse", "authors": [ { "first": "J", "middle": [], "last": "Robert", "suffix": "" }, { "first": "", "middle": [], "last": "Connors", "suffix": "" } ], "year": 1981, "venue": "", "volume": "32", "issue": "", "pages": "444--455", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert J Connors. 1981. The rise and fall of the modes of discourse. 
College Composition and Communication, 32(4):444-455.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Developing the bangla rst discourse treebank", "authors": [ { "first": "Debopam", "middle": [], "last": "Das", "suffix": "" }, { "first": "Manfred", "middle": [], "last": "Stede", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Debopam Das and Manfred Stede. 2018. De- veloping the bangla rst discourse treebank. In Proceedings of the Eleventh Interna- tional Conference on Language Resources and Evaluation (LREC 2018).", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Dimlex-bangla: A lexicon of bangla discourse connectives", "authors": [ { "first": "Debopam", "middle": [], "last": "Das", "suffix": "" }, { "first": "Manfred", "middle": [], "last": "Stede", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 12th Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "1097--1102", "other_ids": {}, "num": null, "urls": [], "raw_text": "Debopam Das, Manfred Stede, Soumya Sankar Ghosh, and Lahari Chatterjee. 2020. Dimlex-bangla: A lexicon of bangla discourse connectives. 
In Proceed- ings of the 12th Language Resources and Evaluation Conference, pages 1097-1102.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Bert: Pretraining of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre- training of deep bidirectional transformers for language understanding.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "An annotated dataset of discourse modes in hindi stories", "authors": [ { "first": "Swapnil", "middle": [], "last": "Dhanwal", "suffix": "" }, { "first": "Hritwik", "middle": [], "last": "Dutta", "suffix": "" }, { "first": "Hitesh", "middle": [], "last": "Nankani", "suffix": "" }, { "first": "Nilay", "middle": [], "last": "Shrivastava", "suffix": "" }, { "first": "Yaman", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Junyi", "middle": [ "Jessy" ], "last": "Li", "suffix": "" }, { "first": "Debanjan", "middle": [], "last": "Mahata", "suffix": "" }, { "first": "Rakesh", "middle": [], "last": "Gosangi", "suffix": "" }, { "first": "Haimin", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Rajiv", "middle": [], "last": "Shah", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 12th Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "1191--1196", "other_ids": {}, "num": null, "urls": [], "raw_text": "Swapnil Dhanwal, Hritwik Dutta, Hitesh Nankani, Nilay Shrivastava, Yaman Kumar, Junyi Jessy Li, Debanjan Mahata, Rakesh Gosangi, Haimin Zhang, Rajiv Shah, et al. 2020. 
An annotated dataset of discourse modes in hindi stories. In Proceedings of the 12th Language Resources and Evalua- tion Conference, pages 1191-1196.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Discourse-oriented anaphora resolution in natural language understanding: A review", "authors": [ { "first": "Graerne", "middle": [], "last": "Hirst", "suffix": "" } ], "year": 1981, "venue": "American journal of computational linguistics", "volume": "7", "issue": "2", "pages": "85--98", "other_ids": {}, "num": null, "urls": [], "raw_text": "Graerne Hirst. 1981. Discourse-oriented anaphora resolution in natural language un- derstanding: A review. American journal of computational linguistics, 7(2):85-98.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1412.6980" ] }, "num": null, "urls": [], "raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimiza- tion. arXiv preprint arXiv:1412.6980.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Assessing the discourse factors that influence the quality of machine translation", "authors": [ { "first": "Jessy", "middle": [], "last": "Junyi", "suffix": "" }, { "first": "Marine", "middle": [], "last": "Li", "suffix": "" }, { "first": "Ani", "middle": [], "last": "Carpuat", "suffix": "" }, { "first": "", "middle": [], "last": "Nenkova", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "283--288", "other_ids": {}, "num": null, "urls": [], "raw_text": "Junyi Jessy Li, Marine Carpuat, and Ani Nenkova. 2014. 
Assessing the discourse fac- tors that influence the quality of machine translation. In Proceedings of the 52nd An- nual Meeting of the Association for Compu- tational Linguistics (Volume 2: Short Pa- pers), pages 283-288.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "The role of discourse units in near-extractive summarization", "authors": [ { "first": "Jessy", "middle": [], "last": "Junyi", "suffix": "" }, { "first": "Kapil", "middle": [], "last": "Li", "suffix": "" }, { "first": "Amanda", "middle": [], "last": "Thadani", "suffix": "" }, { "first": "", "middle": [], "last": "Stent", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue", "volume": "", "issue": "", "pages": "137--147", "other_ids": {}, "num": null, "urls": [], "raw_text": "Junyi Jessy Li, Kapil Thadani, and Amanda Stent. 2016. The role of discourse units in near-extractive summarization. In Proceed- ings of the 17th Annual Meeting of the Spe- cial Interest Group on Discourse and Dia- logue, pages 137-147.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Roberta: A robustly optimized bert pretraining approach", "authors": [ { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Jingfei", "middle": [], "last": "Du", "suffix": "" }, { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ 
"arXiv:1907.11692" ] }, "num": null, "urls": [], "raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly op- timized bert pretraining approach. arXiv preprint arXiv:1907.11692.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Scikit-learn: Machine learning in Python", "authors": [ { "first": "F", "middle": [], "last": "Pedregosa", "suffix": "" }, { "first": "G", "middle": [], "last": "Varoquaux", "suffix": "" }, { "first": "A", "middle": [], "last": "Gramfort", "suffix": "" }, { "first": "V", "middle": [], "last": "Michel", "suffix": "" }, { "first": "B", "middle": [], "last": "Thirion", "suffix": "" }, { "first": "O", "middle": [], "last": "Grisel", "suffix": "" }, { "first": "M", "middle": [], "last": "Blondel", "suffix": "" }, { "first": "P", "middle": [], "last": "Prettenhofer", "suffix": "" }, { "first": "R", "middle": [], "last": "Weiss", "suffix": "" }, { "first": "V", "middle": [], "last": "Dubourg", "suffix": "" }, { "first": "J", "middle": [], "last": "Vanderplas", "suffix": "" }, { "first": "A", "middle": [], "last": "Passos", "suffix": "" }, { "first": "D", "middle": [], "last": "Cournapeau", "suffix": "" }, { "first": "M", "middle": [], "last": "Brucher", "suffix": "" }, { "first": "M", "middle": [], "last": "Perrot", "suffix": "" }, { "first": "E", "middle": [], "last": "Duchesnay", "suffix": "" } ], "year": 2011, "venue": "Journal of Machine Learning Research", "volume": "12", "issue": "", "pages": "2825--2830", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blon- del, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duches- nay. 2011. Scikit-learn: Machine learning in Python. 
Journal of Machine Learning Research, 12:2825-2830.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "The complete idiot's guide to grammar and style", "authors": [ { "first": "Laurie", "middle": [], "last": "Rozakis", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Laurie Rozakis. 2003. The complete idiot's guide to grammar and style. Penguin.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Identification of rhetorical structure relation from discourse marker in bengali language understanding", "authors": [ { "first": "Abhishek", "middle": [], "last": "Sarkar", "suffix": "" }, { "first": "Pinaki", "middle": [], "last": "Sankar Chatterjee", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abhishek Sarkar and Pinaki Sankar Chatter- jee. 2013. Identification of rhetorical struc- ture relation from discourse marker in ben- gali language understanding.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Cross-lingual sentiment classification in low-resource bengali language", "authors": [ { "first": "Salim", "middle": [], "last": "Sazzed", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the sixth workshop on noisy user-generated text (W-NUT 2020)", "volume": "", "issue": "", "pages": "50--60", "other_ids": {}, "num": null, "urls": [], "raw_text": "Salim Sazzed. 2020. Cross-lingual sentiment classification in low-resource bengali lan- guage. 
In Proceedings of the sixth work- shop on noisy user-generated text (W-NUT 2020), pages 50-60.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Abusive content detection in transliterated bengali-english social media corpus", "authors": [ { "first": "Salim", "middle": [], "last": "Sazzed", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the Fifth Workshop on Computational Approaches to Linguistic Code-Switching", "volume": "", "issue": "", "pages": "125--130", "other_ids": {}, "num": null, "urls": [], "raw_text": "Salim Sazzed. 2021a. Abusive content detec- tion in transliterated bengali-english social media corpus. In Proceedings of the Fifth Workshop on Computational Approaches to Linguistic Code-Switching, pages 125-130.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Identifying vulgarity in bengali social media textual content", "authors": [ { "first": "Salim", "middle": [], "last": "Sazzed", "suffix": "" } ], "year": 2021, "venue": "PeerJ Computer Science", "volume": "7", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Salim Sazzed. 2021b. Identifying vulgarity in bengali social media textual content. PeerJ Computer Science, 7:e665.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "A lexicon for profane and obscene text identification in bengali", "authors": [ { "first": "Salim", "middle": [], "last": "Sazzed", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the International Conference on Recent Advances in Natural Language Processing", "volume": "", "issue": "", "pages": "1289--1296", "other_ids": {}, "num": null, "urls": [], "raw_text": "Salim Sazzed. 2021c. A lexicon for profane and obscene text identification in bengali. 
In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021), pages 1289- 1296.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "A sentiment classification in bengali and machine translated english corpus", "authors": [ { "first": "Salim", "middle": [], "last": "Sazzed", "suffix": "" }, { "first": "Sampath", "middle": [], "last": "Jayarathna", "suffix": "" } ], "year": 2019, "venue": "2019 IEEE 20th international conference on information reuse and integration for data science (IRI)", "volume": "", "issue": "", "pages": "107--114", "other_ids": {}, "num": null, "urls": [], "raw_text": "Salim Sazzed and Sampath Jayarathna. 2019. A sentiment classification in bengali and machine translated english corpus. In 2019 IEEE 20th international conference on in- formation reuse and integration for data sci- ence (IRI), pages 107-114. IEEE.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Modes of discourse: The local structure of texts", "authors": [ { "first": "S", "middle": [], "last": "Carlota", "suffix": "" }, { "first": "", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2003, "venue": "", "volume": "103", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carlota S Smith. 2003. Modes of discourse: The local structure of texts, volume 103. 
Cambridge University Press.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Discourse mode identification in essays", "authors": [ { "first": "Wei", "middle": [], "last": "Song", "suffix": "" }, { "first": "Dong", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Ruiji", "middle": [], "last": "Fu", "suffix": "" }, { "first": "Lizhen", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Guoping", "middle": [], "last": "Hu", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "112--122", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wei Song, Dong Wang, Ruiji Fu, Lizhen Liu, Ting Liu, and Guoping Hu. 2017. Discourse mode identification in essays. In Proceedings of the 55th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), pages 112-122.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Evaluating discourse-based answer extraction for whyquestion answering", "authors": [ { "first": "Suzan", "middle": [], "last": "Verberne", "suffix": "" }, { "first": "Lou", "middle": [], "last": "Boves", "suffix": "" }, { "first": "Nelleke", "middle": [], "last": "Oostdijk", "suffix": "" }, { "first": "Peter-Arno", "middle": [], "last": "Coppen", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval", "volume": "", "issue": "", "pages": "735--736", "other_ids": {}, "num": null, "urls": [], "raw_text": "Suzan Verberne, Lou Boves, Nelleke Oostdijk, and Peter-Arno Coppen. 2007. Evaluating discourse-based answer extraction for why- question answering. 
In Proceedings of the 30th annual international ACM SIGIR con- ference on Research and development in in- formation retrieval, pages 735-736.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Discourse structure and language technology", "authors": [ { "first": "Bonnie", "middle": [], "last": "Webber", "suffix": "" }, { "first": "Markus", "middle": [], "last": "Egg", "suffix": "" }, { "first": "Valia", "middle": [], "last": "Kordoni", "suffix": "" } ], "year": 2012, "venue": "Natural Language Engineering", "volume": "18", "issue": "4", "pages": "437--490", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bonnie Webber, Markus Egg, and Valia Ko- rdoni. 2012. Discourse structure and lan- guage technology. Natural Language Engi- neering, 18(4):437-490.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Huggingface's transformers: State-of-theart natural language processing", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Delangue", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Moi", "suffix": "" }, { "first": "Pierric", "middle": [], "last": "Cistac", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rault", "suffix": "" }, { "first": "R\u00e9mi", "middle": [], "last": "Louf", "suffix": "" }, { "first": "Morgan", "middle": [], "last": "Funtowicz", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1910.03771" ] }, "num": null, "urls": [], "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, An- thony Moi, Pierric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtowicz, et al. 2019. 
Huggingface's transformers: State-of-the- art natural language processing. arXiv preprint arXiv:1910.03771.", "links": null } }, "ref_entries": { "TABREF1": { "type_str": "table", "text": "Statistics of various discourse modes in the annotated corpus", "num": null, "html": null, "content": "
Discourse Mode   #Sentences   #Words/Sentence
Narrative        2282         14.62
Descriptive      782          23.43
Informative      246          11.73
" }, "TABREF2": { "type_str": "table", "text": "Performance of various approaches for discourse mode prediction", "num": null, "html": null, "content": "
Type   Classifier          Narrative       Descriptive     Informative
                           F1/Acc.         F1/Acc.         F1/Acc.
CML    LR                  0.8857/0.9708   0.6796/0.5896   0.0640/0.0333
       SVM                 0.8739/0.9787   0.6126/0.4909   0.0328/0.0167
       RF                  0.8433/0.9911   0.3773/0.2416   0.0165/0.0083
       ET                  0.8458/0.9938   0.4000/0.2571   0.0328/0.0167
DL     Multilingual BERT   0.9120/0.9570   0.6600/0.6875   0.0468/0.0240
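The CML results above were obtained with n-gram based statistical features. As a rough illustration only (the exact n-gram range, feature weighting, and classifier hyperparameters are assumptions, not taken from the paper), such a pipeline can be sketched with scikit-learn:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy English stand-ins; the actual task uses Bengali sentences labeled
# narrative / descriptive / informative.
sentences = [
    "he walked to the station and boarded the train",
    "the old house stood quietly under a grey sky",
    "the city has a population of two million",
    "she opened the letter and began to read",
]
labels = ["narrative", "descriptive", "informative", "narrative"]

# TF-IDF over word unigrams and bigrams feeding a logistic regression
# classifier (assumed settings, chosen only for illustration).
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
clf.fit(sentences, labels)
pred = clf.predict(["he walked back to the old house"])
```

The same pipeline shape accommodates the other CML classifiers in the table (SVM, RF, ET) by swapping the final estimator.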
" }, "TABREF3": { "type_str": "table", "text": "An example of the confusion matrix yielded by the LR classifier", "num": null, "html": null, "content": "
Class         Narrative   Descriptive   Informative
Narrative     2213        69            0
Descriptive   337         438           7
Informative   184         50            12
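Per-class precision, recall (the per-class accuracy reported here), and F1 can be recovered directly from such a confusion matrix. A small pure-Python sketch using the LR matrix above (since the matrix is labeled an example, the derived scores need not match the averaged figures in the performance table exactly):

```python
# Rows = true class, columns = predicted class, in the order
# narrative, descriptive, informative (values from the LR matrix above).
cm = [
    [2213, 69, 0],
    [337, 438, 7],
    [184, 50, 12],
]

def per_class_metrics(cm, i):
    """Precision, recall, and F1 for class index i of a square confusion matrix."""
    tp = cm[i][i]
    predicted = sum(row[i] for row in cm)  # column sum: everything predicted as class i
    actual = sum(cm[i])                    # row sum: everything truly class i
    precision = tp / predicted if predicted else 0.0
    recall = tp / actual if actual else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

p, r, f1 = per_class_metrics(cm, 0)  # narrative class
print(f"narrative: precision={p:.4f} recall={r:.4f} f1={f1:.4f}")
```

The row sums (2282, 782, 246) reproduce the per-mode sentence counts in the corpus statistics table, which confirms the matrix layout.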
" } } } }