{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T11:58:00.032435Z" }, "title": "Emotion Classification in German Plays with Transformer-based Language Models Pretrained on Historical and Contemporary Language", "authors": [ { "first": "Thomas", "middle": [], "last": "Schmidt", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Regensburg", "location": { "country": "Germany" } }, "email": "thomas.schmidt@ur.de" }, { "first": "Katrin", "middle": [], "last": "Dennerlein", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of W\u00fcrzburg", "location": { "country": "Germany" } }, "email": "katrin.dennerlein@uni-wuerzburg.de" }, { "first": "Christian", "middle": [], "last": "Wolff", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Regensburg", "location": { "country": "Germany" } }, "email": "christian.wolff@ur.de" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present results of a project on emotion classification on historical German plays of Enlightenment, Storm and Stress, and German Classicism. We have developed a hierarchical annotation scheme consisting of 13 subemotions like suffering, love and joy that sum up to 6 main and 2 polarity classes (positive/negative). We have conducted textual annotations on 11 German plays and have acquired over 13,000 emotion annotations by two annotators per play. We have evaluated multiple traditional machine learning approaches as well as transformer-based models pretrained on historical and contemporary language for a single-label text sequence emotion classification for the different emotion categories. The evaluation is carried out on three different instances of the corpus: (1) taking all annotations, (2) filtering overlapping annotations by annotators, (3) applying a heuristic for speech-based analysis. Best results are achieved on the filtered corpus with the best models being large transformer-based models pretrained on contemporary German language. For the polarity classification accuracies of up to 90% are achieved. The accuracies become lower for settings with a higher number of classes, achieving 66% for 13 sub-emotions. Further pretraining of a historical model with a corpus of dramatic texts led to no improvements.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "We present results of a project on emotion classification on historical German plays of Enlightenment, Storm and Stress, and German Classicism. We have developed a hierarchical annotation scheme consisting of 13 subemotions like suffering, love and joy that sum up to 6 main and 2 polarity classes (positive/negative). We have conducted textual annotations on 11 German plays and have acquired over 13,000 emotion annotations by two annotators per play. We have evaluated multiple traditional machine learning approaches as well as transformer-based models pretrained on historical and contemporary language for a single-label text sequence emotion classification for the different emotion categories. The evaluation is carried out on three different instances of the corpus: (1) taking all annotations, (2) filtering overlapping annotations by annotators, (3) applying a heuristic for speech-based analysis. Best results are achieved on the filtered corpus with the best models being large transformer-based models pretrained on contemporary German language. 
For the polarity classification, accuracies of up to 90% are achieved. The accuracies become lower for settings with a higher number of classes, achieving 66% for 13 sub-emotions. Further pretraining of a historical model with a corpus of dramatic texts led to no improvements.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Transformer-based language models like BERT (Devlin et al., 2019) and ELECTRA (Clark et al., 2019) have recently gained a lot of attention and achieve state-of-the-art results for various tasks in natural language processing (NLP) (Qiu et al., 2020). These language models are usually trained via deep learning on large amounts of texts acquired from the internet. Unlike previous methods in NLP, these models use context-sensitive word representations and can better deal with out-of-vocabulary words. These attributes are, of course, advantageous for various text types in digital humanities (DH) and computational literary studies (CLS). Furthermore, transformer-based language models can be adapted to specific domain texts by either training a model from scratch on large amounts of these texts or taking an existing model and further pretraining it with domain-specific texts (Beltagy et al., 2019; Gururangan et al., 2020; Rietzler et al., 2020). Transformer-based models as well as these adaptation approaches have been successfully applied in DH contexts with historical or poetic German texts for named entity recognition (NER) (Schweter and Baiter, 2019; Labusch et al., 2019) and speech type recognition (Brunner et al., 2020). We present a study in the same line of research for the task of textual emotion classification for the use case of German historical plays.", "cite_spans": [ { "start": 44, "end": 65, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF9" }, { "start": 78, "end": 98, "text": "(Clark et al., 2019)", "ref_id": "BIBREF6" }, { "start": 231, "end": 249, "text": "(Qiu et al., 2020)", "ref_id": "BIBREF23" }, { "start": 887, "end": 909, "text": "(Beltagy et al., 2019;", "ref_id": "BIBREF1" }, { "start": 910, "end": 934, "text": "Gururangan et al., 2020;", "ref_id": null }, { "start": 935, "end": 957, "text": "Rietzler et al., 2020)", "ref_id": "BIBREF26" }, { "start": 1133, "end": 1160, "text": "(Schweter and Baiter, 2019;", "ref_id": "BIBREF40" }, { "start": 1161, "end": 1182, "text": "Labusch et al., 2019)", "ref_id": "BIBREF17" }, { "start": 1211, "end": 1233, "text": "(Brunner et al., 2020)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Emotion classification deals with the prediction of (multiple) emotion categories in text. Its neighbouring field sentiment analysis primarily focuses on the prediction of the overall polarity (or valence) of a text, meaning whether it is rather positive or negative (M\u00e4ntyl\u00e4 et al., 2018). Both methods have been explored in DH and CLS to analyze emotion/sentiment distributions and progressions in social media (Schmidt et al., 2020b) or literary texts like plays (Nalisnick and Baird, 2013; Schmidt et al., 2019b; Schmidt, 2019), novels (Zehe et al., 2016; Reagan et al., 2016) and fairy tales (Alm and Sproat, 2005; Mohammad, 2011) (see Kim and Klinger (2019) for an in-depth review of this research area). 
However, as the review of Kim and Klinger (2019) and recent tool developments in DH show (Schmidt et al., 2021a), the application of rather basic lexicon-based methods is frequent, although these methods are usually outperformed by more modern approaches in sentiment and emotion classification (Cao et al., 2020; Dang et al., 2020; Cortiz, 2021; Gonz\u00e1lez-Carvajal and Garrido-Merch\u00e1n, 2021) and are especially problematic for literary texts (Fehle et al., 2021). Furthermore, performance evaluations of computational approaches against human annotations (\"gold standard\") are rare. Thus, we present an evaluation study of emotion classification for the use case of German historical plays (from Enlightenment, Storm and Stress and German Classicism). Our goal is to develop emotion classification algorithms with a satisfactory performance for the described use case in order to investigate, in later stages of our research, for example, emotion progressions throughout time or genre-based differences concerning emotion distributions on a larger set of plays. We primarily focus on current state-of-the-art transformer-based language models.", "cite_spans": [ { "start": 262, "end": 284, "text": "(M\u00e4ntyl\u00e4 et al., 2018)", "ref_id": "BIBREF20" }, { "start": 409, "end": 432, "text": "(Schmidt et al., 2020b)", "ref_id": "BIBREF36" }, { "start": 462, "end": 489, "text": "(Nalisnick and Baird, 2013;", "ref_id": "BIBREF21" }, { "start": 490, "end": 512, "text": "Schmidt et al., 2019b;", "ref_id": "BIBREF31" }, { "start": 513, "end": 527, "text": "Schmidt, 2019)", "ref_id": "BIBREF27" }, { "start": 537, "end": 556, "text": "(Zehe et al., 2016;", "ref_id": "BIBREF47" }, { "start": 557, "end": 577, "text": "Reagan et al., 2016)", "ref_id": "BIBREF24" }, { "start": 594, "end": 616, "text": "(Alm and Sproat, 2005;", "ref_id": "BIBREF0" }, { "start": 617, "end": 632, "text": "Mohammad, 2011", "ref_id": null }, { "start": 640, "end": 662, "text": "Kim and Klinger (2019)", "ref_id": "BIBREF16" }, { "start": 736, "end": 758, "text": "Kim and Klinger (2019)", "ref_id": "BIBREF16" }, { "start": 799, "end": 822, "text": "(Schmidt et al., 2021a)", "ref_id": "BIBREF32" }, { "start": 1004, "end": 1022, "text": "(Cao et al., 2020;", "ref_id": "BIBREF4" }, { "start": 1023, "end": 1041, "text": "Dang et al., 2020;", "ref_id": "BIBREF8" }, { "start": 1042, "end": 1055, "text": "Cortiz, 2021;", "ref_id": null }, { "start": 1056, "end": 1100, "text": "Gonz\u00e1lez-Carvajal and Garrido-Merch\u00e1n, 2021)", "ref_id": "BIBREF13" }, { "start": 1151, "end": 1171, "text": "(Fehle et al., 2021)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The main contributions of this paper are as follows: (1) the development of an emotion annotation scheme directed towards the interests of literary scholars for the time frame of our corpus, (2) the annotation results for the annotation of 11 plays by 2 annotators for each play, (3) a systematic evaluation, on different instances of the annotated corpus, of traditional textual machine learning (ML) approaches and of transformer-based models pretrained on contemporary and historical language as well as further pretrained on dramatic texts. 
The goal of this contribution is to work towards the development of emotion classification algorithms with a satisfactory performance for the described use case.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the following section, we describe the conceptual framework and process for the acquisition of the annotated corpus that serves as training and evaluation corpus for the emotion classification (\"Gold Standard\").", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training and Evaluation Corpus", "sec_num": "2" }, { "text": "The main goal of the scheme development was to create an annotation scheme that includes the interests of literary scholars and the interpretative and historical dimensions of these literary texts. Common emotion annotation schemes in NLP are mostly inspired by psychology, oftentimes consisting of 6-8 established emotion classes (cf. Wood et al., 2018a,b). However, we regard these concept sets as unfit for our specific case, since important emotion and affect concepts from the perspective of literary criticism for the time of our plays are missing, while other concepts are not specifically important for our text genre. Thus, we developed a novel annotation scheme based on literary theory and redesigned it in an iterative process of small pilot annotations and discussions. Our final scheme deviates heavily from the more common schemes in emotion annotation in NLP. Some concepts well-known in NLP and psychology, like joy, fear or anger, are included, while other standard emotion concepts like disgust and surprise proved in pilot annotations to be of little importance. Concepts that we include because they are important for the literary criticism of that time, although they are not usually regarded as emotions, are desire, suffering and compassion. Please refer to Schmidt et al. (2021b) for more information about the scheme creation and the annotation process.", "cite_spans": [ { "start": 336, "end": 357, "text": "Wood et al., 2018a,b)", "ref_id": null }, { "start": 1255, "end": 1278, "text": "Schmidt et al. (2021b)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Emotion Scheme", "sec_num": "2.1" }, { "text": "The final scheme consists of 13 sub-emotions that are hierarchically clustered, including one special concept (emotional movement). The sub-emotions are summarized in six main classes, which can then be clustered in a final binary setting of two classes (similar to sentiment): (per default) positive and negative emotions (marked in the upcoming list as + and - respectively; we refer to this concept as polarity). 
In the following list, we name the sub-emotions and main classes with the original German term in brackets (since we perform the annotations in German) and an English translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Emotion Scheme", "sec_num": "2.1" }, { "text": "\u2022 emotions of affection (Emotionen der Zuneigung)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Emotion Scheme", "sec_num": "2.1" }, { "text": "- desire (Lust) (+) - love (Liebe) (+) - friendship (Freundschaft) (+) - admiration (Verehrung) (+)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Emotion Scheme", "sec_num": "2.1" }, { "text": "\u2022 emotions of joy (Emotionen der Freude)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Emotion Scheme", "sec_num": "2.1" }, { "text": "- joy (Freude) (+) - Schadenfreude (joy about the misfortune of others) (+)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Emotion Scheme", "sec_num": "2.1" }, { "text": "\u2022 emotions of fear (Emotionen der Angst)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Emotion Scheme", "sec_num": "2.1" }, { "text": "- fear (Angst) (-) - despair (Verzweiflung) (-)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Emotion Scheme", "sec_num": "2.1" }, { "text": "\u2022 emotions of rejection (Emotionen der Ablehnung)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Emotion Scheme", "sec_num": "2.1" }, { "text": "- anger (\u00c4rger) (-) - hate, disgust (Hass, Abscheu) (-)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Emotion Scheme", "sec_num": "2.1" }, { "text": "\u2022 emotions of suffering (Emotionen des Leids)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Emotion Scheme", "sec_num": "2.1" }, { "text": "- suffering (Leid) (-) - compassion (Mitleid) (-)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Emotion Scheme", "sec_num": "2.1" }, { "text": "\u2022 emotional movement", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Emotion Scheme", "sec_num": "2.1" }, { "text": "Emotional movement has no polarity and is used to describe astonishment, emotional turmoil, excitation and oscillation between several emotions. We will refer to the combination of the positive and the negative class as well as emotional movement as triple polarity. The various hierarchical structures are later used for classification approaches with different class numbers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Emotion Scheme", "sec_num": "2.1" }, { "text": "Annotators are instructed to assign sub-emotions, as defined in our scheme, to text. We regard the character's state of mind as expressed in the text as the emotion to be annotated. Annotations are performed context-sensitively, meaning annotators should take into account the plot and content of the entire play and annotate what the character really means as determined by the literary interpretation. Thus, plays are read and annotated from beginning to end, covering stage directions and speeches (single utterances of characters separated by the utterances of other characters). Depending on the emotional expression in the text, annotators can mark text sequences of varied lengths (ranging from one word to an entire speech) and are not limited to a concrete annotation size. 
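To make the hierarchy of Section 2.1 concrete, the following is a minimal sketch (our illustration, not code from the project) of the mapping from the 13 sub-emotions to their main classes and default polarities, using the English labels from the list above:

```python
# Sketch: sub-emotion -> (main class, default polarity) as defined by the scheme.
SCHEME = {
    "desire": ("emotions of affection", "+"),
    "love": ("emotions of affection", "+"),
    "friendship": ("emotions of affection", "+"),
    "admiration": ("emotions of affection", "+"),
    "joy": ("emotions of joy", "+"),
    "Schadenfreude": ("emotions of joy", "+"),
    "fear": ("emotions of fear", "-"),
    "despair": ("emotions of fear", "-"),
    "anger": ("emotions of rejection", "-"),
    "hate, disgust": ("emotions of rejection", "-"),
    "suffering": ("emotions of suffering", "-"),
    "compassion": ("emotions of suffering", "-"),
    "emotional movement": ("emotional movement", None),  # special concept, no polarity
}

def coarsen(sub_emotion, level):
    """Map a sub-emotion to the requested level of the hierarchy."""
    main_class, polarity = SCHEME[sub_emotion]
    if level == "main":
        return main_class
    if level == "polarity":  # binary setting; emotional movement is filtered out
        return polarity
    if level == "triple":    # positive / negative / emotional movement
        return polarity if polarity is not None else "emotional movement"
    return sub_emotion
```

The `coarsen` helper mirrors how the 6 main classes, the triple polarity and the binary polarity are all derived from the same sub-emotion annotations; note that, as described in the following section, annotators may also adjust the default polarity for certain cases.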
Furthermore, annotators can assign multiple annotations to a text sequence, fully or partially overlapping (see Figure 1), and adjust the default polarity of sub-emotions for certain cases. The annotation procedure just presented is closer to the interpretation process of literary scholars than the context-free approaches with fixed text sizes for annotation attribution that are more common in NLP (M\u00e4ntyl\u00e4 et al., 2018). It has been deemed more fitting throughout multiple pilot annotations with literary scholars. The annotation process itself is performed with the tool CATMA 1 (Gius et al., 2020). Two annotators annotate each play independently of each other in a time span of 1-2 weeks, depending on the length of the play. All annotators are students of German literary studies and are compensated monetarily for the annotation. They have access to an annotation instruction manual with descriptions of the scheme and examples. They also participated in test annotations under the guidance of an expert literary scholar. Indeed, the entire annotation process is iterative (cf. Reiter, 2020), meaning that scheme and instructions changed based on feedback throughout the project cycle and might change again (the study presented here, however, has been performed consistently in the way described).", "cite_spans": [ { "start": 1167, "end": 1189, "text": "(M\u00e4ntyl\u00e4 et al., 2018)", "ref_id": "BIBREF20" }, { "start": 1354, "end": 1373, "text": "(Gius et al., 2020)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 882, "end": 890, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Annotation Process", "sec_num": "2.2" }, { "text": "As part of our larger project, we intend to analyze emotion classification on historical German plays between 1650 and 1815. Our current corpus consists of around 300 digitized plays. For this evaluation study, we annotated a representative sub-corpus of 11 plays of varying epochs and genres. However, we focus on more recent plays for this first evaluation study, since older ones are more likely to pose challenges to the applied language models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotated Plays", "sec_num": "2.3" }, { "text": "Depending on the length of a play, the annotation duration for each play was 8-15 hours. We collected 13,264 annotations of varying lengths. Table 1 illustrates the distributions for the sub-emotions as well as the resulting sums for the main classes.", "cite_spans": [], "ref_spans": [ { "start": 161, "end": 168, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Annotation Statistics", "sec_num": "2.4" }, { "text": "The most frequent sub-emotions are suffering (16%) and love (13%); for the main classes, the most frequent are the emotions of rejection (23%) and affection (22%). The overall distribution is rather imbalanced, with certain sub-emotions being rarely annotated (e.g. desire). Considering the overall triple polarity, the majority of annotations are negative (53%), followed by positive (37%) and emotional movement (11%). We also examined token statistics about annotation lengths: on average, an annotation consists of 25 tokens, however with a large variance, ranging from 1-token annotations to multiple sentences consisting of over 500 tokens. 
This shows that annotators make significant use of the possibility of varied annotation lengths.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation Statistics", "sec_num": "2.4" }, { "text": "Due to the varied annotation lengths, calculating inter-annotator agreement is not possible with common metrics. However, to get an overall understanding of the agreement, we calculate it according to the following speech-based heuristic: for each annotator and speech, the emotion that is annotated the most within the speech (measured in number of tokens) is assigned to the speech (or a neutral class if no emotion is annotated). This results in a Cohen's \u03ba value of 0.5 for polarity (percentage-wise agreement: 68%) and 0.4 for main classes (62%) and sub-emotions (58%) respectively. This is regarded as moderate agreement (Landis and Koch, 1977), which is low compared with sentiment analysis research on other text types (cf. M\u00e4ntyl\u00e4 et al., 2018) but in line with similar annotation projects on literary and historical texts (Alm and Sproat, 2005; Sprugnoli et al., 2015; Schmidt et al., 2019a; \u00d6hman, 2020; Schmidt et al., 2020a).", "cite_spans": [ { "start": 650, "end": 661, "text": "Koch, 1977)", "ref_id": "BIBREF18" }, { "start": 746, "end": 767, "text": "M\u00e4ntyl\u00e4 et al., 2018)", "ref_id": "BIBREF20" }, { "start": 848, "end": 870, "text": "(Alm and Sproat, 2005;", "ref_id": "BIBREF0" }, { "start": 871, "end": 894, "text": "Sprugnoli et al., 2015;", "ref_id": "BIBREF42" }, { "start": 895, "end": 918, "text": "Schmidt et al., 2019a;", "ref_id": "BIBREF30" }, { "start": 919, "end": 931, "text": "\u00d6hman, 2020;", "ref_id": "BIBREF48" }, { "start": 932, "end": 954, "text": "Schmidt et al., 2020a)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Annotation Statistics", "sec_num": "2.4" }, { "text": "Due to the varied annotation text sequence lengths and the moderate agreement statistics, we evaluated and trained the chosen emotion classification approaches on different \"manifestations\" of our corpus. We refer to the first one as full corpus. This manifestation includes all text annotations of the two annotators for every play. Thus, it includes annotations on which the annotators fully or partially disagree. This is the largest corpus manifestation, consisting of 13,264 annotations (for more statistics see Table 1). For the classification of polarity, we reduce the corpora by filtering out annotations with emotional movement, which results in 11,883 annotations for the full corpus. The second manifestation is referred to as filtered corpus. For this corpus instance, we filter out all annotations on which the annotators either fully or partially disagree, meaning annotations of different categories that overlap for at least one token. We do not filter out annotated text sequences by one annotator that are not annotated by the other one. We do, however, filter out all overlapping contrary annotations by a single annotator. While our annotation scheme enables this kind of annotation, we want to evaluate how the filtering of all contrary overlaps influences emotion classification. Depending on the emotion hierarchy, this results in different annotation numbers for the final filtered corpus: 9,962 (polarity), 10,247 (triple polarity), 8,552 (main class), 7,503 (sub-emotions). 
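The filtering rule just described can be illustrated with a minimal sketch (our reconstruction, assuming annotations are given as token-index spans with a category label):

```python
def overlaps(a, b):
    """True if two annotations (start, end, label) share at least one token."""
    return a[0] < b[1] and b[0] < a[1]

def filter_disagreements(annotations):
    """Drop every annotation that overlaps a differently labelled annotation,
    whether the overlap comes from the other annotator or the same one."""
    kept = []
    for i, a in enumerate(annotations):
        contested = any(overlaps(a, b) and a[2] != b[2]
                        for j, b in enumerate(annotations) if j != i)
        if not contested:
            kept.append(a)
    return kept

# Hypothetical token spans pooled from both annotators of one play:
spans = [(0, 10, "love"), (5, 12, "desire"), (20, 30, "fear"), (22, 28, "fear")]
print(filter_disagreements(spans))  # [(20, 30, 'fear'), (22, 28, 'fear')]
```

Note that spans without any counterpart by the other annotator are kept, matching the description above; only contrary overlaps are removed.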
This filtering reduces the corpus size by 15-44%, depending on the categorical system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus Manifestations", "sec_num": "2.5" }, { "text": "The last manifestation, the speech corpus, is focused on the central units of plays: speeches and stage directions. It is designed as follows: each speech of the plays (we include stage directions in the following when speaking about speeches) is assigned the emotion category that is annotated the most by both annotators (as measured by number of tokens). If multiple classes are tied, the class that is chosen the least overall is assigned (to counteract class imbalances). The entire corpus consists of 11,617 speeches; we filter out speeches with no annotation by either annotator to avoid adding an extra neutral-like class to our already multi-class setting (adding neutrality is something we intend to explore in future work). This reduces the number of speeches to 6,741 and especially affects stage directions, which are rarely annotated. We apply the above heuristic to acquire the emotion assignments. Please note that the emotion distributions change compared to the other manifestations, since underrepresented classes become even rarer due to the applied heuristic; thus, the class imbalances intensify. Distribution statistics for the filtered and speech corpus can be found in the appendix (Tables 6, 7 and 8).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus Manifestations", "sec_num": "2.5" }, { "text": "We separate the corpus into these three manifestations in order to explore performance on different classification levels and text sizes, which will influence our decision for the later large-scale emotion prediction tasks on larger corpora of plays that we plan for future stages of our project.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus Manifestations", "sec_num": "2.5" }, { "text": "We regard the emotion classification as single-label classification on text sequences of varied lengths. The number of classes differs depending on the hierarchical system: polarity (2 classes), triple polarity (3 classes), main classes (6), sub-emotions (13). We have implemented reference baselines based on traditional ML-approaches but otherwise focus on transformer-based language models for German pretrained on contemporary and historical texts, since transformer-based models have been shown to achieve state-of-the-art results for emotion classification (Shmueli and Ku, 2019; Yang et al., 2019; Cao et al., 2020) and performed best in a pre-study (Schmidt et al., 2021c). 
We also explore further fine-tuning/pretraining of a pretrained model with our domain texts, since research suggests performance improvements for this method (Beltagy et al., 2019; Gururangan et al., 2020; Rietzler et al., 2020).", "cite_spans": [ { "start": 561, "end": 583, "text": "(Shmueli and Ku, 2019;", "ref_id": "BIBREF41" }, { "start": 584, "end": 602, "text": "Yang et al., 2019;", "ref_id": "BIBREF46" }, { "start": 603, "end": 620, "text": "Cao et al., 2020)", "ref_id": "BIBREF4" }, { "start": 655, "end": 678, "text": "(Schmidt et al., 2021c)", "ref_id": "BIBREF34" }, { "start": 838, "end": 860, "text": "(Beltagy et al., 2019;", "ref_id": "BIBREF1" }, { "start": 861, "end": 885, "text": "Gururangan et al., 2020;", "ref_id": null }, { "start": 886, "end": 908, "text": "Rietzler et al., 2020)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Emotion Classification Methods", "sec_num": "3" }, { "text": "The following \"classical\" ML-methods for text are implemented (methods like these are usually outperformed by transformer-based approaches in other settings (Gonz\u00e1lez-Carvajal and Garrido-Merch\u00e1n, 2021) and thus serve as lower baselines in the following evaluation): (1) representation of text units with term frequencies in a bag-of-words model and subsequently Multinomial Naive Bayes as training algorithm; (2) the same representation format as above but Support Vector Machines as training algorithm (a minimal sketch of both baselines follows below).", "cite_spans": [ { "start": 156, "end": 201, "text": "(Gonz\u00e1lez-Carvajal and Garrido-Merch\u00e1n, 2021)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline Methods", "sec_num": "3.1" }, { "text": "We implemented the approaches with the scikit-learn machine learning library 3 (Pedregosa et al., 2011) and trained and evaluated the algorithms in a stratified 5x5 cross-validation setting. We refer to the first approach as bow-nb and the second one as bow-svm. We will also report the random and majority baseline for each classification task. Please note that depending on the corpus type, these values might vary.", "cite_spans": [ { "start": 78, "end": 102, "text": "(Pedregosa et al., 2011)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline Methods", "sec_num": "3.1" }, { "text": "We selected the (to our knowledge) most well-known and established transformer-based language models in German that are freely available. Table 2 summarizes the selected models (the identifiers are used in the following to reference the models). All models are acquired via the Hugging Face platform 4 and are also implemented with the corresponding library (Wolf et al., 2020).", "cite_spans": [ { "start": 358, "end": 377, "text": "(Wolf et al., 2020)", "ref_id": null } ], "ref_spans": [ { "start": 137, "end": 145, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Transformer-based Models", "sec_num": "3.2" }, { "text": "One main point of interest is the performance difference between models pretrained on contemporary texts (e.g. Wikipedia, subtitles) for general-purpose tasks and models pretrained on historical texts (e.g. historical newspapers, historical fictional texts). In Table 2 we attribute the label \"historical\" to a model if a significant part of the texts dates from before the 20th century. 
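As announced in Section 3.1, here is a minimal sketch of the two baselines (our illustration; `texts` and `labels` are placeholders, and "stratified 5x5 cross evaluation" is read here as five repetitions of stratified 5-fold cross-validation):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["Ach, ich leide!", "Welch eine Freude!"] * 50  # placeholder annotations
labels = ["suffering", "joy"] * 50                      # placeholder emotion labels

# bow-nb: term frequencies in a bag-of-words model + Multinomial Naive Bayes.
# bow-svm: the same representation, but a linear Support Vector Machine.
for name, estimator in [("bow-nb", MultinomialNB()), ("bow-svm", LinearSVC())]:
    pipeline = make_pipeline(CountVectorizer(), estimator)
    cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=5, random_state=0)
    scores = cross_val_score(pipeline, texts, labels, cv=cv, scoring="accuracy")
    print(name, round(scores.mean(), 3))
```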
We want to evaluate whether these historical models perform better, since their pretraining language is closer to the language of our plays, which date from the 18th and 19th centuries.", "cite_spans": [], "ref_spans": [ { "start": 274, "end": 281, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Transformer-based Models", "sec_num": "3.2" }, { "text": "For the contemporary models, we evaluate, among others, the models gbert-large and gelectra-large by Deepset 5, which achieve state-of-the-art results in standardized NLP-tasks (Chan et al., 2020) and are, to our knowledge, the largest German BERT- and ELECTRA-based models. On the historical side, we evaluate two models provided by the European Digital Library Europeana, pretrained on historical newspapers (Schweter and Baiter, 2019; Schweter, 2020), and a model focused on fictional texts (Brunner et al., 2020). To perform the training and evaluation, each model is fine-tuned to the downstream task of emotion classification for the specific hierarchy and corpus. We apply the settings recommended for the respective architecture, BERT (Devlin et al., 2019) or ELECTRA (Clark et al., 2019), as well as by the Hugging Face library. Each model is fine-tuned for 4 epochs with a batch size of 32, a learning rate of 4e-5 and the Adam optimizer for stochastic gradient descent. The models are trained and evaluated in a 5x5 cross-validation setting; thus, averages over 5 runs are reported. A Tesla P100 GPU was used.", "cite_spans": [ { "start": 175, "end": 194, "text": "(Chan et al., 2020)", "ref_id": "BIBREF5" }, { "start": 405, "end": 432, "text": "(Schweter and Baiter, 2019;", "ref_id": "BIBREF40" }, { "start": 433, "end": 448, "text": "Schweter, 2020)", "ref_id": "BIBREF39" }, { "start": 488, "end": 510, "text": "(Brunner et al., 2020)", "ref_id": "BIBREF2" }, { "start": 774, "end": 795, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF9" }, { "start": 807, "end": 827, "text": "(Clark et al., 2019)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Transformer-based Models", "sec_num": "3.2" }, { "text": "All of the above models are trained from scratch on large amounts of texts. However, recent research also suggests that further pretraining of already existing models with texts that are close to the texts of the downstream task may improve results (domain-specific fine-tuning) (Gururangan et al., 2020; Rietzler et al., 2020). We explore this approach and further pretrain the model bert-base-german-europeana-cased solely with German dramatic texts that we acquired from our corpus sources (including the annotated texts). The texts consist of all German plays of GerDracor (Fischer et al., 2019), of the platform TextGrid 6 and of around 60 plays we acquired via various other sources. Altogether, the texts sum up to 300 MB, consisting of 1,224 plays that range from the 16th to the 20th century. We use the simpletransformers library 7 and further pretrain the model bert-base-german-europeana-cased for 10 epochs. The setting and parameters for the emotion classification training are the same as for the general models. 
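The fine-tuning setup just described can be sketched as follows with the Hugging Face library (our illustration; the model identifier, the placeholder data and the dataset wrapper are assumptions, while epochs, batch size and learning rate follow the values above):

```python
import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL = "deepset/gbert-large"  # stand-in for any model of Table 2
texts = ["Ach, ich leide!", "Welch eine Freude!"]  # placeholder annotations
labels = [0, 1]                                    # integer-encoded emotion classes

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)

class AnnotationDataset(torch.utils.data.Dataset):
    """Wraps tokenized annotation texts and their labels for the Trainer."""
    def __init__(self, texts, labels):
        self.encodings = tokenizer(texts, truncation=True, padding=True)
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, idx):
        item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

args = TrainingArguments(output_dir="emotion-clf", num_train_epochs=4,
                         per_device_train_batch_size=32, learning_rate=4e-5)
trainer = Trainer(model=model, args=args,
                  train_dataset=AnnotationDataset(texts, labels))
trainer.train()
```

The Trainer defaults to an Adam-based optimizer (AdamW); the 5x5 cross-validation would then be a loop of stratified splits around this training call.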
We refer to this further pretrained model as bert-europeana-further-pretrained.", "cite_spans": [ { "start": 274, "end": 299, "text": "(Gururangan et al., 2020;", "ref_id": null }, { "start": 300, "end": 322, "text": "Rietzler et al., 2020)", "ref_id": "BIBREF26" }, { "start": 570, "end": 592, "text": "(Fischer et al., 2019)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Transformer-based Models", "sec_num": "3.2" }, { "text": "We report accuracies and F1-scores for all models, category systems and corpus manifestations in Tables 3, 4 and 5. Considering F1-scores, we report weighted F1 due to the imbalanced class distributions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "In general, transformer-based models outperform the traditional ML-approaches. For every corpus manifestation, the performance of the different transformer-based models is rather similar, regardless of whether contemporary or historical language is the basis for the pretraining. The best models are the large contemporary models gbert-large and gelectra-large, achieving up to 90% for polarity (2 classes), 85% for triple polarity (3 classes), 75% for main classes (6 classes) and 66% for sub-emotions (13 classes) on the filtered corpus. The historical models perform rather similarly, consistently slightly below the large contemporary ones but slightly above the smaller model bert-base-german-europeana-cased. Considering the different corpus manifestations, all models perform best on the filtered corpus and worst for the speech-based prediction. The difference becomes larger with an increasing number of classes. For example, gbert-large achieves an accuracy of 75% for main class prediction on the filtered corpus, which drops to 51% on the speech corpus. As the analysis of recall and F1-macro statistics shows, this is mostly due to the poor prediction accuracies for low-frequency classes. 
8 Further pretraining the model bert-base-german-europeana-cased with dramatic texts did not result in improvements. Indeed, the accuracies become slightly worse, and significantly lower in settings with multiple classes (e.g. 29% for sub-emotions on the filtered corpus).", "cite_spans": [ { "start": 1208, "end": 1209, "text": "8", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "Table 2: Transformer-based models for the evaluation (columns: identifier | Hugging Face-identifier | language | pretraining data | source). gbert-large | gbert-large | contemporary | crawled web data, Wikipedia, subtitles, book, legal texts (\u223c161 GB) | Deepset (Chan et al., 2020). gelectra-large | gelectra-large | contemporary | crawled web data, Wikipedia, subtitles, book, legal texts (\u223c161 GB) | Deepset (Chan et al., 2020). bert-europeana | bert-base-german-europeana-cased | historical | Europeana newspapers (51 GB) | MDZ Digital Library (Schweter, 2020). electra-europeana | electra-base-german-europeana-cased-discriminator | historical | Europeana newspapers (51 GB) | MDZ Digital Library (Schweter, 2020). bert-historical-rw | bert-base-historical-german-rw-cased | historical | fairy tales, historical newspapers, magazine articles, narrative texts, texts of Projekt Gutenberg | (Brunner et al., 2020). bert-europeana-further-pretrained | - | contemporary, further pretrained on historical texts | based on bert-base-german-europeana-cased, further pretrained with dramatic texts of GerDracor, TextGrid and other sources (300 MB) | -. The Hugging Face-identifier can be used to retrieve the models from the Hugging Face platform; bert-europeana-further-pretrained was created by the authors of this paper via further pretraining.", "cite_spans": [ { "start": 263, "end": 282, "text": "(Chan et al., 2020)", "ref_id": "BIBREF5" }, { "start": 548, "end": 564, "text": "(Schweter, 2020)", "ref_id": "BIBREF39" }, { "start": 894, "end": 916, "text": "(Brunner et al., 2020)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "As the results show, we can confirm general findings of NLP-research for classification tasks on various text genres, in the sense that transformer-based models perform better than traditional textual ML-approaches in our setting with German historical plays. However, we cannot confirm our assumption that models pretrained on historical language achieve better results because they are closer to the language of our annotated material. Indeed, the best performing models are gbert-large and gelectra-large by Deepset (Chan et al., 2020). These are, to our knowledge, the largest German models trained on contemporary texts, primarily internet texts. The difference between the historical and these contemporary models is, however, small. 
Since the differences in the amount of pretraining text are significant (around 20 GB), this opens up the question of whether the performance of the historical models would improve with similarly large amounts of text.", "cite_spans": [ { "start": 519, "end": 538, "text": "(Chan et al., 2020)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion and Future Work", "sec_num": "5" }, { "text": "Considering the different corpus instances, we showed that filtering out overlapping annotations that annotators disagree upon results in the strongest performance boost, although the training and test sizes become smaller. Thus, it is crucial for our project to find ways to deal with disagreements among annotators. Due to the varied and overlapping annotation lengths, we cannot rely on standard solutions like majority voting. Furthermore, the inherent subjectivity of literary texts and the resulting low agreement among annotators is a specific feature of these kinds of texts. We do, however, think that we can reduce disagreement with further training of the annotators and also by implementing a subsequent step after the first annotations of the two annotators, in which an expert literary scholar creates a consensus annotation resolving the disagreements. Additionally, we intend to switch from single-label classification to multi-label emotion classification, since this is more in line with the annotation process. This will open up further possibilities to deal with overlapping annotations and integrate this phenomenon into the classification task. Applying a heuristic to map single emotion classes to entire speeches led to the models performing rather poorly compared to the other corpus manifestations. For sub-emotion prediction with 13 classes, accuracies became 25% worse for certain models. While one reason is that this corpus is the smallest of all manifestations, we argue that the main problem is, as the annotations showed, that most speeches consist of multiple, oftentimes differing emotion categories. Mapping them heuristically to a single class results in text units that include various emotional expressions being falsely mapped to one emotion. This problem intensifies due to the fact that many speeches are rather long and that the class imbalances for main classes and sub-emotions are significant. Thus, we plan to focus on smaller text unit sizes like sentences or n-grams in the future emotion prediction task over the entire corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Future Work", "sec_num": "5" }, { "text": "Considering the results for the filtered and full corpus, the transformer-based models achieve state-of-the-art accuracies for polarity classification (88-90%) compared to results of sentiment analysis with similar numbers of classes on contemporary German (Chan et al., 2020). The results achieved by the transformer-based models for polarity are also around 20% above results on German dramatic texts predicted by lexicon-based sentiment analysis, which yields results around 70%. For the main class and sub-emotion classification, however, the results of the best models (75% for main classes, 66% for sub-emotions) are below state-of-the-art results on emotion classification tasks with 4 or more classes for contemporary English texts, for which accuracies of up to 86% are reported (Shmueli and Ku, 2019; Yang et al., 2019; Cao et al., 2020), though, for the most part, with larger training corpora and fewer classes than in our setting. 
We intend to improve the performance to satisfactory levels by hyperparameter-tuning and especially by exploring recommended ML-methods like over- and undersampling to deal with the class imbalances (Buda et al., 2018), which are one of the main problems of the main class and sub-emotion classification.", "cite_spans": [ { "start": 255, "end": 274, "text": "(Chan et al., 2020)", "ref_id": "BIBREF5" }, { "start": 786, "end": 798, "text": "(Shmueli and Ku, 2019;", "ref_id": "BIBREF41" }, { "start": 805, "end": 823, "text": "Yang et al., 2019;", "ref_id": "BIBREF46" }, { "start": 824, "end": 841, "text": "Cao et al., 2020)", "ref_id": "BIBREF4" }, { "start": 1136, "end": 1155, "text": "(Buda et al., 2018)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion and Future Work", "sec_num": "5" }, { "text": "Among all transformer-based models, the bert-europeana model further pretrained on dramatic texts yields the lowest accuracies. The performance becomes especially low for main classes and sub-emotions (see Table 4). A reason might be that, while research argues that further pretraining with even low amounts of text can show improvements, the amount of text used in our setting (300 MB) is below the amounts reported in similar research (Kameswara Sarma et al., 2018; Gururangan et al., 2020; Rietzler et al., 2020). The usage of solely dramatic texts instead of varied forms of texts for the further pretraining might also lead to problems in generalizing to the specific language of the annotated material. Furthermore, a significant proportion of the selected dramatic texts actually dates from the middle to the end of the 19th century and the beginning of the 20th century. Thus, the language might again deviate strongly from the time span of our plays. This might also be a reason why the historical transformer-based models in our evaluation show no relevant improvements. Investigating the training corpora of these models (Schweter, 2020; Brunner et al., 2020) shows that large proportions of the texts actually date from the 19th and 20th centuries. For our future studies, we plan to continue our exploration of domain-specific fine-tuning by acquiring larger amounts of general text material (and not only dramatic texts) focused on the time span of our interest, 1650-1815, to train models from scratch and to evaluate whether we can identify performance improvements. We intend to achieve satisfactory levels of accuracy to perform large-scale analyses of emotion distributions and progressions for our entire corpus of around 300 plays. 
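As one concrete instance of the oversampling mentioned above, the following minimal sketch (our illustration; real experiments might rather use a library such as imbalanced-learn) duplicates randomly chosen minority-class training examples until all classes reach the majority-class frequency:

```python
import random
from collections import Counter

def random_oversample(texts, labels, seed=0):
    """Duplicate minority-class examples (to be applied to the training split only)."""
    rng = random.Random(seed)
    counts = Counter(labels)
    target = max(counts.values())
    grouped = {c: [t for t, l in zip(texts, labels) if l == c] for c in counts}
    out_texts, out_labels = list(texts), list(labels)
    for c, examples in grouped.items():
        for _ in range(target - counts[c]):
            out_texts.append(rng.choice(examples))
            out_labels.append(c)
    return out_texts, out_labels

texts, labels = random_oversample(
    ["t1", "t2", "t3", "t4"], ["joy", "suffering", "suffering", "suffering"])
print(Counter(labels))  # Counter({'joy': 3, 'suffering': 3})
```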
", "cite_spans": [ { "start": 436, "end": 466, "text": "(Kameswara Sarma et al., 2018;", "ref_id": "BIBREF15" }, { "start": 467, "end": 491, "text": "Gururangan et al., 2020;", "ref_id": null }, { "start": 492, "end": 514, "text": "Rietzler et al., 2020)", "ref_id": "BIBREF26" }, { "start": 1121, "end": 1137, "text": "(Schweter, 2020;", "ref_id": "BIBREF39" }, { "start": 1138, "end": 1159, "text": "Brunner et al., 2020)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion and Future Work", "sec_num": "5" }, { "text": "https://catma.de/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://lithes.uni-graz.at/maezene/ eberl_mandolettikraemer.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://scikit-learn.org/stable/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://huggingface.co/ 5 https://deepset.ai/german-bert", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://textgrid.de/ digitale-bibliothek 7 https://simpletransformers.ai/ 8 Additional data about the results can be found via the following repository: https://github.com/ lauchblatt/Emotions_in_Drama", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We want to thank the following student annotators for their contributions to this project: Carlina Eizenberger, Viola Hipler, Emma Ru\u00df, Leon Sautter and Lisa Schattmann.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null }, { "text": "This research is part of the project \"Emotions in Drama\" (Emotionen im Drama), funded by the German Research Foundation (DFG) and part of the priority programme SPP 2207 Computational Literary Studies (CLS).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Funding", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Emotional Sequencing and Development in Fairy Tales", "authors": [ { "first": "Cecilia", "middle": [], "last": "Ovesdotter", "suffix": "" }, { "first": "Alm", "middle": [], "last": "", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Sproat", "suffix": "" } ], "year": 2005, "venue": "Affective Computing and Intelligent Interaction", "volume": "", "issue": "", "pages": "668--674", "other_ids": { "DOI": [ "10.1007/11573548_86" ] }, "num": null, "urls": [], "raw_text": "Cecilia Ovesdotter Alm and Richard Sproat. 2005. Emotional Sequencing and Development in Fairy Tales. In Affective Computing and Intelligent Inter- action, Lecture Notes in Computer Science, pages 668-674, Berlin, Heidelberg. Springer.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "SciB-ERT: A Pretrained Language Model for Scientific Text", "authors": [ { "first": "Iz", "middle": [], "last": "Beltagy", "suffix": "" }, { "first": "Kyle", "middle": [], "last": "Lo", "suffix": "" }, { "first": "Arman", "middle": [], "last": "Cohan", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1903.10676[cs].ArXiv:1903.10676" ] }, "num": null, "urls": [], "raw_text": "Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciB- ERT: A Pretrained Language Model for Scientific Text. arXiv:1903.10676 [cs]. 
ArXiv: 1903.10676.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "To bert or not to bert-comparing contextual embeddings in a deep learning architecture for the automatic recognition of four types of speech, thought and writing representation", "authors": [ { "first": "Annelen", "middle": [], "last": "Brunner", "suffix": "" }, { "first": "Ngoc", "middle": [ "Duyen" ], "last": "", "suffix": "" }, { "first": "Tanja", "middle": [], "last": "Tu", "suffix": "" }, { "first": "Lukas", "middle": [], "last": "Weimer", "suffix": "" }, { "first": "Fotis", "middle": [], "last": "Jannidis", "suffix": "" } ], "year": 2020, "venue": "SwissText/KONVENS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annelen Brunner, Ngoc Duyen Tanja Tu, Lukas Weimer, and Fotis Jannidis. 2020. To bert or not to bert-comparing contextual embeddings in a deep learning architecture for the automatic recognition of four types of speech, thought and writing repre- sentation. In SwissText/KONVENS.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A systematic study of the class imbalance problem in convolutional neural networks", "authors": [ { "first": "Mateusz", "middle": [], "last": "Buda", "suffix": "" }, { "first": "Atsuto", "middle": [], "last": "Maki", "suffix": "" }, { "first": "Maciej", "middle": [ "A" ], "last": "Mazurowski", "suffix": "" } ], "year": 2018, "venue": "Neural Networks", "volume": "106", "issue": "", "pages": "249--259", "other_ids": { "DOI": [ "10.1016/j.neunet.2018.07.011" ] }, "num": null, "urls": [], "raw_text": "Mateusz Buda, Atsuto Maki, and Maciej A. Mazurowski. 2018. A systematic study of the class imbalance problem in convolutional neural networks. Neural Networks, 106:249-259. ArXiv: 1710.05381.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A Survey of Emotion Analysis in Text Based on Deep Learning", "authors": [ { "first": "Lihong", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Sancheng", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Pengfei", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Yongmei", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Aimin", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Xinguang", "middle": [], "last": "Li", "suffix": "" } ], "year": 2020, "venue": "2020 IEEE 8th International Conference on Smart City and Informatization (iSCI)", "volume": "", "issue": "", "pages": "81--88", "other_ids": { "DOI": [ "10.1109/iSCI50694.2020.00020" ] }, "num": null, "urls": [], "raw_text": "Lihong Cao, Sancheng Peng, Pengfei Yin, Yongmei Zhou, Aimin Yang, and Xinguang Li. 2020. A Sur- vey of Emotion Analysis in Text Based on Deep Learning. In 2020 IEEE 8th International Con- ference on Smart City and Informatization (iSCI), pages 81-88, Guangzhou, China. IEEE.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "German's Next Language Model", "authors": [ { "first": "Branden", "middle": [], "last": "Chan", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Schweter", "suffix": "" }, { "first": "Timo", "middle": [], "last": "M\u00f6ller", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2010.10906[cs].ArXiv:2010.10906" ] }, "num": null, "urls": [], "raw_text": "Branden Chan, Stefan Schweter, and Timo M\u00f6ller. 2020. German's Next Language Model. arXiv:2010.10906 [cs]. 
ArXiv: 2010.10906.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "ELECTRA: Pretraining Text Encoders as Discriminators Rather Than Generators", "authors": [ { "first": "Kevin", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Minh-Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Quoc", "middle": [ "V" ], "last": "Le", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2019. ELECTRA: Pre- training Text Encoders as Discriminators Rather Than Generators.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Exploring Transformers in Emotion Recognition: a comparison of BERT", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2104.02041[cs].ArXiv:2104.02041" ] }, "num": null, "urls": [], "raw_text": "Diogo Cortiz. 2021. Exploring Transformers in Emotion Recognition: a comparison of BERT, DistillBERT, RoBERTa, XLNet and ELECTRA. arXiv:2104.02041 [cs]. ArXiv: 2104.02041.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Sentiment Analysis Based on Deep Learning: A Comparative Study. Electronics", "authors": [ { "first": "", "middle": [], "last": "Nhan Cach Dang", "suffix": "" }, { "first": "N", "middle": [], "last": "Mar\u00eda", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Moreno-Garc\u00eda", "suffix": "" }, { "first": "", "middle": [], "last": "De La Prieta", "suffix": "" } ], "year": 2020, "venue": "", "volume": "9", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.3390/electronics9030483" ] }, "num": null, "urls": [], "raw_text": "Nhan Cach Dang, Mar\u00eda N. Moreno-Garc\u00eda, and Fer- nando De la Prieta. 2020. Sentiment Analysis Based on Deep Learning: A Comparative Study. Electron- ics, 9(3):483. ArXiv: 2006.03541.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. 
Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Lexicon-based sentiment analysis in german: Systematic evaluation of resources and preprocessing techniques", "authors": [ { "first": "Jakob", "middle": [], "last": "Fehle", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Schmidt", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Wolff", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 17th Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jakob Fehle, Thomas Schmidt, and Christian Wolff. 2021. Lexicon-based sentiment analysis in german: Systematic evaluation of resources and preprocessing techniques. In Proceedings of the 17th Conference on Natural Language Processing (KONVENS 2021), D\u00fcsseldorf, Germany.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Programmable Corpora: Introducing DraCor, an Infrastructure for the Research on European Drama", "authors": [ { "first": "Frank", "middle": [], "last": "Fischer", "suffix": "" }, { "first": "Ingo", "middle": [], "last": "B\u00f6rner", "suffix": "" }, { "first": "Mathias", "middle": [], "last": "G\u00f6bel", "suffix": "" }, { "first": "Angelika", "middle": [], "last": "Hechtl", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Kittel", "suffix": "" }, { "first": "Carsten", "middle": [], "last": "Milling", "suffix": "" }, { "first": "Peer", "middle": [], "last": "Trilcke", "suffix": "" } ], "year": 2019, "venue": "Conference Name: Digital Humanities 2019: \"Complexities\" (DH2019) Publisher: Zenodo", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.5281/zenodo.4284002" ] }, "num": null, "urls": [], "raw_text": "Frank Fischer, Ingo B\u00f6rner, Mathias G\u00f6bel, Angelika Hechtl, Christopher Kittel, Carsten Milling, and Peer Trilcke. 2019. Programmable Corpora: Introducing DraCor, an Infrastructure for the Research on European Drama. Conference Name: Digital Humanities 2019: \"Complexities\" (DH2019) Publisher: Zenodo.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "CATMA", "authors": [ { "first": "Evelyn", "middle": [], "last": "Gius", "suffix": "" }, { "first": "Jan", "middle": [ "Christoph" ], "last": "Meister", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Petris", "suffix": "" }, { "first": "Malte", "middle": [], "last": "Meister", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Bruck", "suffix": "" }, { "first": "Janina", "middle": [], "last": "Jacke", "suffix": "" }, { "first": "Mareike", "middle": [], "last": "Schuhmacher", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.5281/zenodo.4353618" ] }, "num": null, "urls": [], "raw_text": "Evelyn Gius, Jan Christoph Meister, Marco Petris, Malte Meister, Christian Bruck, Janina Jacke, Mareike Schuhmacher, Marie Fl\u00fch, and Jan Horstmann. 2020. 
CATMA.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Comparing BERT against traditional machine learning text classification", "authors": [ { "first": "Santiago", "middle": [], "last": "Gonz\u00e1lez", "suffix": "" }, { "first": "-", "middle": [], "last": "Carvajal", "suffix": "" }, { "first": "Eduardo", "middle": [ "C" ], "last": "Garrido-Merch\u00e1n", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2005.13012[cs,stat].ArXiv:2005.13012" ] }, "num": null, "urls": [], "raw_text": "Santiago Gonz\u00e1lez-Carvajal and Eduardo C. Garrido- Merch\u00e1n. 2021. Comparing BERT against traditional machine learning text classification. arXiv:2005.13012 [cs, stat]. ArXiv: 2005.13012.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "2020. Don't Stop Pretraining: Adapt Language Models to Domains and Tasks", "authors": [ { "first": "Ana", "middle": [], "last": "Suchin Gururangan", "suffix": "" }, { "first": "Swabha", "middle": [], "last": "Marasovi\u0107", "suffix": "" }, { "first": "Kyle", "middle": [], "last": "Swayamdipta", "suffix": "" }, { "first": "Iz", "middle": [], "last": "Lo", "suffix": "" }, { "first": "Doug", "middle": [], "last": "Beltagy", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Downey", "suffix": "" }, { "first": "", "middle": [], "last": "Smith", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2004.10964[cs].ArXiv:2004.10964" ] }, "num": null, "urls": [], "raw_text": "Suchin Gururangan, Ana Marasovi\u0107, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don't Stop Pretraining: Adapt Language Models to Domains and Tasks. arXiv:2004.10964 [cs]. ArXiv: 2004.10964.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Domain Adapted Word Embeddings for Improved Sentiment Classification", "authors": [ { "first": "Yingyu", "middle": [], "last": "Prathusha Kameswara Sarma", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Liang", "suffix": "" }, { "first": "", "middle": [], "last": "Sethares", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Workshop on Deep Learning Approaches for Low-Resource NLP", "volume": "", "issue": "", "pages": "51--59", "other_ids": { "DOI": [ "10.18653/v1/W18-3407" ] }, "num": null, "urls": [], "raw_text": "Prathusha Kameswara Sarma, Yingyu Liang, and Bill Sethares. 2018. Domain Adapted Word Embed- dings for Improved Sentiment Classification. In Proceedings of the Workshop on Deep Learning Ap- proaches for Low-Resource NLP, pages 51-59, Mel- bourne. Association for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A Survey on Sentiment and Emotion Analysis for Computational Literary Studies. Zeitschrift f\u00fcr digitale Geisteswissenschaften", "authors": [ { "first": "Evgeny", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Roman", "middle": [], "last": "Klinger", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.17175/2019_008" ] }, "num": null, "urls": [], "raw_text": "Evgeny Kim and Roman Klinger. 2019. A Survey on Sentiment and Emotion Analysis for Computational Literary Studies. Zeitschrift f\u00fcr digitale Geisteswis- senschaften. 
ArXiv: 1808.03137.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "BERT for Named Entity Recognition in Contemporary and Historical German", "authors": [ { "first": "Kai", "middle": [], "last": "Labusch", "suffix": "" }, { "first": "Clemens", "middle": [], "last": "Neudecker", "suffix": "" }, { "first": "David", "middle": [], "last": "Zellhofer", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kai Labusch, Clemens Neudecker, and David Zellhofer. 2019. BERT for Named Entity Recognition in Con- temporary and Historical German. page 9.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "The Measurement of Observer Agreement for Categorical Data", "authors": [ { "first": "J", "middle": [], "last": "", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Landis", "suffix": "" }, { "first": "Gary", "middle": [ "G" ], "last": "Koch", "suffix": "" } ], "year": 1977, "venue": "Biometrics", "volume": "33", "issue": "1", "pages": "159--174", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Richard Landis and Gary G. Koch. 1977. The Mea- surement of Observer Agreement for Categorical Data. Biometrics, 33(1):159-174. Publisher: [Wi- ley, International Biometric Society].", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "From Once Upon a Time to Happily Ever After: Tracking Emotions in Novels and Fairy Tales", "authors": [ { "first": "Saif", "middle": [], "last": "Mohammad", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 5th ACL-HLT Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities", "volume": "", "issue": "", "pages": "105--114", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saif Mohammad. 2011. From Once Upon a Time to Happily Ever After: Tracking Emotions in Novels and Fairy Tales. In Proceedings of the 5th ACL-HLT Workshop on Language Technology for Cultural Her- itage, Social Sciences, and Humanities, pages 105- 114, Portland, OR, USA. Association for Computa- tional Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "The evolution of sentiment analysis-A review of research topics, venues, and top cited papers", "authors": [ { "first": "Mika", "middle": [ "V" ], "last": "M\u00e4ntyl\u00e4", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Graziotin", "suffix": "" }, { "first": "Miikka", "middle": [], "last": "Kuutila", "suffix": "" } ], "year": 2018, "venue": "Computer Science Review", "volume": "27", "issue": "", "pages": "16--32", "other_ids": { "DOI": [ "10.1016/j.cosrev.2017.10.002" ] }, "num": null, "urls": [], "raw_text": "Mika V. M\u00e4ntyl\u00e4, Daniel Graziotin, and Miikka Kuu- tila. 2018. The evolution of sentiment analysis-A review of research topics, venues, and top cited pa- pers. Computer Science Review, 27:16-32.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Characterto-Character Sentiment Analysis in Shakespeare's Plays", "authors": [ { "first": "Eric", "middle": [ "T" ], "last": "Nalisnick", "suffix": "" }, { "first": "Henry", "middle": [ "S" ], "last": "Baird", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "479--483", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eric T. Nalisnick and Henry S. Baird. 2013. Character- to-Character Sentiment Analysis in Shakespeare's Plays. 
In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 479-483, Sofia, Bulgaria. Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Scikit-learn: Machine learning in Python", "authors": [ { "first": "Fabian", "middle": [], "last": "Pedregosa", "suffix": "" }, { "first": "Ga\u00ebl", "middle": [], "last": "Varoquaux", "suffix": "" }, { "first": "Alexandre", "middle": [], "last": "Gramfort", "suffix": "" }, { "first": "Vincent", "middle": [], "last": "Michel", "suffix": "" }, { "first": "Bertrand", "middle": [], "last": "Thirion", "suffix": "" }, { "first": "Olivier", "middle": [], "last": "Grisel", "suffix": "" }, { "first": "Mathieu", "middle": [], "last": "Blondel", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Prettenhofer", "suffix": "" }, { "first": "Ron", "middle": [], "last": "Weiss", "suffix": "" } ], "year": 2011, "venue": "Journal of machine learning research", "volume": "12", "issue": "", "pages": "2825--2830", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fabian Pedregosa, Ga\u00ebl Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, and others. 2011. Scikit-learn: Machine learning in Python. Journal of machine learning research, 12(Oct):2825-2830.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Pre-trained Models for Natural Language Processing: A Survey", "authors": [ { "first": "Xipeng", "middle": [], "last": "Qiu", "suffix": "" }, { "first": "Tianxiang", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Yige", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Yunfan", "middle": [], "last": "Shao", "suffix": "" }, { "first": "Ning", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Xuanjing", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2003.08271[cs].ArXiv:2003.08271" ] }, "num": null, "urls": [], "raw_text": "Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, and Xuanjing Huang. 2020. Pre-trained Models for Natural Language Processing: A Survey. arXiv:2003.08271 [cs]. ArXiv: 2003.08271.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "The emotional arcs of stories are dominated by six basic shapes", "authors": [ { "first": "Andrew", "middle": [ "J" ], "last": "Reagan", "suffix": "" }, { "first": "Lewis", "middle": [], "last": "Mitchell", "suffix": "" }, { "first": "Dilan", "middle": [], "last": "Kiley", "suffix": "" }, { "first": "Christopher", "middle": [ "M" ], "last": "Danforth", "suffix": "" }, { "first": "Peter", "middle": [ "Sheridan" ], "last": "Dodds", "suffix": "" } ], "year": 2016, "venue": "EPJ Data Science", "volume": "5", "issue": "1", "pages": "", "other_ids": { "DOI": [ "10.1140/epjds/s13688-016-0093-1" ] }, "num": null, "urls": [], "raw_text": "Andrew J. Reagan, Lewis Mitchell, Dilan Kiley, Christopher M. Danforth, and Peter Sheridan Dodds. 2016. The emotional arcs of stories are dominated by six basic shapes. EPJ Data Science, 5(1):31.
ArXiv: 1606.07772.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Anleitung zur Erstellung von Annotationsrichtlinien", "authors": [ { "first": "Nils", "middle": [], "last": "Reiter", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "193--202", "other_ids": { "DOI": [ "10.1515/9783110693973-009" ] }, "num": null, "urls": [], "raw_text": "Nils Reiter. 2020. Anleitung zur Erstellung von Anno- tationsrichtlinien, pages 193-202. De Gruyter.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Adapt or Get Left Behind: Domain Adaptation through BERT Language Model Finetuning for Aspect-Target Sentiment Classification", "authors": [ { "first": "Alexander", "middle": [], "last": "Rietzler", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Stabinger", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Opitz", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Engl", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 12th Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "4933--4941", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexander Rietzler, Sebastian Stabinger, Paul Opitz, and Stefan Engl. 2020. Adapt or Get Left Be- hind: Domain Adaptation through BERT Language Model Finetuning for Aspect-Target Sentiment Clas- sification. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 4933- 4941, Marseille, France. European Language Re- sources Association.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Distant reading sentiments and emotions in historic german plays", "authors": [ { "first": "Thomas", "middle": [], "last": "Schmidt", "suffix": "" } ], "year": 2019, "venue": "Abstract Booklet, DH_Budapest_2019", "volume": "", "issue": "", "pages": "57--60", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Schmidt. 2019. Distant reading sentiments and emotions in historic german plays. In Ab- stract Booklet, DH_Budapest_2019, pages 57-60. Budapest, Hungary.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "An Evaluation of Lexicon-based Sentiment Analysis Techniques for the Plays of Gotthold Ephraim Lessing", "authors": [ { "first": "Thomas", "middle": [], "last": "Schmidt", "suffix": "" }, { "first": "Manuel", "middle": [], "last": "Burghardt", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Second Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature", "volume": "", "issue": "", "pages": "139--149", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Schmidt and Manuel Burghardt. 2018. An Evaluation of Lexicon-based Sentiment Analysis Techniques for the Plays of Gotthold Ephraim Less- ing. In Proceedings of the Second Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Litera- ture, pages 139-149, Santa Fe, New Mexico. 
Association for Computational Linguistics.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Sentiment Annotation of Historic German Plays: An Empirical Study on Annotation Behavior", "authors": [ { "first": "Thomas", "middle": [], "last": "Schmidt", "suffix": "" }, { "first": "Manuel", "middle": [], "last": "Burghardt", "suffix": "" }, { "first": "Katrin", "middle": [], "last": "Dennerlein", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Workshop on Annotation in Digital Humanities 2018 (annDH 2018)", "volume": "", "issue": "", "pages": "47--52", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Schmidt, Manuel Burghardt, and Katrin Dennerlein. 2018. Sentiment Annotation of Historic German Plays: An Empirical Study on Annotation Behavior. In Sandra K\u00fcbler and Heike Zinsmeister, editors, Proceedings of the Workshop on Annotation in Digital Humanities 2018 (annDH 2018), pages 47-52. RWTH Aachen, Sofia, Bulgaria.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Sentiment Annotation for Lessing's Plays: Towards a Language Resource for Sentiment Analysis on German Literary Texts", "authors": [ { "first": "Thomas", "middle": [], "last": "Schmidt", "suffix": "" }, { "first": "Manuel", "middle": [], "last": "Burghardt", "suffix": "" }, { "first": "Katrin", "middle": [], "last": "Dennerlein", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Wolff", "suffix": "" } ], "year": 2019, "venue": "2nd Conference on Language, Data and Knowledge (LDK 2019)", "volume": "", "issue": "", "pages": "45--50", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Schmidt, Manuel Burghardt, Katrin Dennerlein, and Christian Wolff. 2019a. Sentiment Annotation for Lessing's Plays: Towards a Language Resource for Sentiment Analysis on German Literary Texts. In Thierry Declerck and John P. McCrae, editors, 2nd Conference on Language, Data and Knowledge (LDK 2019), pages 45-50. Leipzig, Germany.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Toward Multimodal Sentiment Analysis of Historic Plays: A Case Study with Text and Audio for Lessing's Emilia Galotti", "authors": [ { "first": "Thomas", "middle": [], "last": "Schmidt", "suffix": "" }, { "first": "Manuel", "middle": [], "last": "Burghardt", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Wolff", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Digital Humanities in the Nordic Countries 4th Conference", "volume": "2364", "issue": "", "pages": "405--414", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Schmidt, Manuel Burghardt, and Christian Wolff. 2019b. Toward Multimodal Sentiment Analysis of Historic Plays: A Case Study with Text and Audio for Lessing's Emilia Galotti. In Proceedings of the Digital Humanities in the Nordic Countries 4th Conference, volume 2364 of CEUR Workshop Proceedings, pages 405-414, Copenhagen, Denmark. CEUR-WS.org.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Senttext: A tool for lexicon-based sentiment analysis in digital humanities", "authors": [ { "first": "Thomas", "middle": [], "last": "Schmidt", "suffix": "" }, { "first": "Johanna", "middle": [], "last": "Dangel", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Wolff", "suffix": "" } ], "year": 2021, "venue": "Information Science and its Neighbors from Data Science to Digital Humanities.
Proceedings of the 16th International Symposium of Information Science (ISI 2021)", "volume": "74", "issue": "", "pages": "156--172", "other_ids": { "DOI": [ "10.5283/epub.44943" ] }, "num": null, "urls": [], "raw_text": "Thomas Schmidt, Johanna Dangel, and Christian Wolff. 2021a. Senttext: A tool for lexicon-based sentiment analysis in digital humanities. In Thomas Schmidt and Christian Wolff, editors, Information Science and its Neighbors from Data Science to Digital Humanities. Proceedings of the 16th International Symposium of Information Science (ISI 2021), volume 74, pages 156-172. Werner H\u00fclsbusch, Gl\u00fcckstadt.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Towards a Corpus of Historical German Plays with Emotion Annotations", "authors": [ { "first": "Thomas", "middle": [], "last": "Schmidt", "suffix": "" }, { "first": "Katrin", "middle": [], "last": "Dennerlein", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Wolff", "suffix": "" } ], "year": 2021, "venue": "3rd Conference on Language, Data and Knowledge (LDK 2021)", "volume": "93", "issue": "", "pages": "1--9", "other_ids": { "DOI": [ "10.4230/OASIcs.LDK.2021.9" ] }, "num": null, "urls": [], "raw_text": "Thomas Schmidt, Katrin Dennerlein, and Christian Wolff. 2021b. Towards a Corpus of Historical German Plays with Emotion Annotations. In 3rd Conference on Language, Data and Knowledge (LDK 2021), volume 93 of Open Access Series in Informatics (OASIcs), pages 9:1-9:11, Dagstuhl, Germany. Schloss Dagstuhl - Leibniz-Zentrum f\u00fcr Informatik.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Using Deep Learning for Emotion Analysis of 18th and 19th Century German Plays", "authors": [ { "first": "Thomas", "middle": [], "last": "Schmidt", "suffix": "" }, { "first": "Katrin", "middle": [], "last": "Dennerlein", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Wolff", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.26298/melusina.8f8w-y749-udlf" ] }, "num": null, "urls": [], "raw_text": "Thomas Schmidt, Katrin Dennerlein, and Christian Wolff. 2021c. Using Deep Learning for Emotion Analysis of 18th and 19th Century German Plays. In Manuel Burghardt, Lisa Dieckmann, Timo Steyer, Peer Trilcke, Niels-Oliver Walkowski, Jo\u00eblle Weis, and Ulrike Wuttke, editors, Fabrikation von Erkenntnis. Experimente in den Digital Humanities.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Comparing live sentiment annotation of movies via arduino and a slider with textual annotation of subtitles", "authors": [ { "first": "Thomas", "middle": [], "last": "Schmidt", "suffix": "" }, { "first": "Isabella", "middle": [], "last": "Engl", "suffix": "" }, { "first": "David", "middle": [], "last": "Halbhuber", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Wolff", "suffix": "" } ], "year": 2020, "venue": "Post-Proceedings of the 5th Conference Digital Humanities in the Nordic Countries (DHN 2020)", "volume": "", "issue": "", "pages": "212--223", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Schmidt, Isabella Engl, David Halbhuber, and Christian Wolff. 2020a. Comparing live sentiment annotation of movies via arduino and a slider with textual annotation of subtitles.
In Post-Proceedings of the 5th Conference Digital Humanities in the Nordic Countries (DHN 2020), pages 212-223.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Distant reading of religious online communities: A case study for three religious forums on reddit", "authors": [ { "first": "Thomas", "middle": [], "last": "Schmidt", "suffix": "" }, { "first": "Florian", "middle": [], "last": "Kaindl", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Wolff", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Digital Humanities in the Nordic Countries 5th Conference (DHN 2020)", "volume": "", "issue": "", "pages": "157--172", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Schmidt, Florian Kaindl, and Christian Wolff. 2020b. Distant reading of religious online communities: A case study for three religious forums on reddit. In Proceedings of the Digital Humanities in the Nordic Countries 5th Conference (DHN 2020), pages 157-172, Riga, Latvia.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Inter-rater agreement and usability: A comparative evaluation of annotation tools for sentiment annotation", "authors": [ { "first": "Thomas", "middle": [], "last": "Schmidt", "suffix": "" }, { "first": "Brigitte", "middle": [], "last": "Winterl", "suffix": "" }, { "first": "Milena", "middle": [], "last": "Maul", "suffix": "" }, { "first": "Alina", "middle": [], "last": "Schark", "suffix": "" }, { "first": "Andrea", "middle": [], "last": "Vlad", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Wolff", "suffix": "" } ], "year": 2019, "venue": "", "volume": "2019", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18420/inf2019_ws12" ] }, "num": null, "urls": [], "raw_text": "Thomas Schmidt, Brigitte Winterl, Milena Maul, Alina Schark, Andrea Vlad, and Christian Wolff. 2019c. Inter-rater agreement and usability: A comparative evaluation of annotation tools for sentiment annotation. In INFORMATIK 2019: 50", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Jahre Gesellschaft f\u00fcr Informatik - Informatik f\u00fcr Gesellschaft (Workshop-Beitr\u00e4ge)", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "121--133", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jahre Gesellschaft f\u00fcr Informatik - Informatik f\u00fcr Gesellschaft (Workshop-Beitr\u00e4ge), pages 121-133, Bonn. Gesellschaft f\u00fcr Informatik e.V.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Europeana BERT and ELECTRA models", "authors": [ { "first": "Stefan", "middle": [], "last": "Schweter", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.5281/zenodo.4275044" ] }, "num": null, "urls": [], "raw_text": "Stefan Schweter. 2020. Europeana BERT and ELECTRA models.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Towards Robust Named Entity Recognition for Historic German", "authors": [ { "first": "Stefan", "middle": [], "last": "Schweter", "suffix": "" }, { "first": "Johannes", "middle": [], "last": "Baiter", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 4th Workshop on Representation Learning for NLP", "volume": "", "issue": "", "pages": "96--103", "other_ids": { "DOI": [ "10.18653/v1/W19-4312" ] }, "num": null, "urls": [], "raw_text": "Stefan Schweter and Johannes Baiter. 2019. Towards Robust Named Entity Recognition for Historic German.
In Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019), pages 96-103, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "SocialNLP EmotionX 2019 Challenge Overview: Predicting Emotions in Spoken Dialogues and Chats", "authors": [ { "first": "Boaz", "middle": [], "last": "Shmueli", "suffix": "" }, { "first": "Lun-Wei", "middle": [], "last": "Ku", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1909.07734[cs].ArXiv:1909.07734" ] }, "num": null, "urls": [], "raw_text": "Boaz Shmueli and Lun-Wei Ku. 2019. SocialNLP EmotionX 2019 Challenge Overview: Predicting Emotions in Spoken Dialogues and Chats. arXiv:1909.07734 [cs]. ArXiv: 1909.07734.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Towards sentiment analysis for historical texts", "authors": [ { "first": "Rachele", "middle": [], "last": "Sprugnoli", "suffix": "" }, { "first": "Sara", "middle": [], "last": "Tonelli", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Marchetti", "suffix": "" }, { "first": "Giovanni", "middle": [], "last": "Moretti", "suffix": "" } ], "year": 2015, "venue": "Digital Scholarship in the Humanities", "volume": "31", "issue": "", "pages": "762--772", "other_ids": { "DOI": [ "10.1093/llc/fqv027" ] }, "num": null, "urls": [], "raw_text": "Rachele Sprugnoli, Sara Tonelli, Alessandro Marchetti, and Giovanni Moretti. 2015. Towards sentiment analysis for historical texts. Digital Scholarship in the Humanities, 31:762-772. Publisher: Oxford: Oxford University Press.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "
HuggingFace's Transformers: State-of-the-art Natural Language Processing", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Delangue", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Moi", "suffix": "" }, { "first": "Pierric", "middle": [], "last": "Cistac", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rault", "suffix": "" }, { "first": "R\u00e9mi", "middle": [], "last": "Louf", "suffix": "" }, { "first": "Morgan", "middle": [], "last": "Funtowicz", "suffix": "" }, { "first": "Joe", "middle": [], "last": "Davison", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Shleifer", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "von Platen", "suffix": "" }, { "first": "Clara", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Yacine", "middle": [], "last": "Jernite", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Plu", "suffix": "" }, { "first": "Canwen", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Teven", "middle": [ "Le" ], "last": "Scao", "suffix": "" }, { "first": "Sylvain", "middle": [], "last": "Gugger", "suffix": "" }, { "first": "Mariama", "middle": [], "last": "Drame", "suffix": "" }, { "first": "Quentin", "middle": [], "last": "Lhoest", "suffix": "" }, { "first": "Alexander", "middle": [ "M" ], "last": "Rush", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1910.03771" ] }, "num": null, "urls": [], "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. HuggingFace's Transformers: State-of-the-art Natural Language Processing. arXiv:1910.03771 [cs].", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "A Comparison of Emotion Annotation Approaches for Text", "authors": [ { "first": "Ian", "middle": [], "last": "Wood", "suffix": "" }, { "first": "John", "middle": [], "last": "Mccrae", "suffix": "" }, { "first": "Vladimir", "middle": [], "last": "Andryushechkin", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Buitelaar", "suffix": "" } ], "year": 2018, "venue": "Information", "volume": "9", "issue": "5", "pages": "117", "other_ids": { "DOI": [ "10.3390/info9050117" ] }, "num": null, "urls": [], "raw_text": "Ian Wood, John McCrae, Vladimir Andryushechkin, and Paul Buitelaar. 2018a. A Comparison of Emotion Annotation Approaches for Text. Information, 9(5):117.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "A Comparison Of Emotion Annotation Schemes And A New Annotated Data Set", "authors": [ { "first": "Ian", "middle": [], "last": "Wood", "suffix": "" }, { "first": "John", "middle": [ "P" ], "last": "Mccrae", "suffix": "" }, { "first": "Vladimir", "middle": [], "last": "Andryushechkin", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Buitelaar", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ian Wood, John P. McCrae, Vladimir Andryushechkin, and Paul Buitelaar. 2018b. A Comparison Of Emotion Annotation Schemes And A New Annotated Data Set.
In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "EmotionX-KU: BERT-Max based Contextual Emotion Classifier", "authors": [ { "first": "Kisu", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Dongyub", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Taesun", "middle": [], "last": "Whang", "suffix": "" }, { "first": "Seolhwa", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Heuiseok", "middle": [], "last": "Lim", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1906.11565[cs].ArXiv:1906.11565" ] }, "num": null, "urls": [], "raw_text": "Kisu Yang, Dongyub Lee, Taesun Whang, Seolhwa Lee, and Heuiseok Lim. 2019. EmotionX-KU: BERT-Max based Contextual Emotion Classifier. arXiv:1906.11565 [cs]. ArXiv: 1906.11565.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Prediction of Happy Endings in German Novels", "authors": [ { "first": "Albin", "middle": [], "last": "Zehe", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Becker", "suffix": "" }, { "first": "Lena", "middle": [], "last": "Hettinger", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "Hotho", "suffix": "" }, { "first": "Isabella", "middle": [], "last": "Reger", "suffix": "" }, { "first": "Fotis", "middle": [], "last": "Jannidis", "suffix": "" } ], "year": 2016, "venue": "DMNLP@PKDD/ECML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Albin Zehe, Martin Becker, Lena Hettinger, Andreas Hotho, Isabella Reger, and Fotis Jannidis. 2016. Prediction of Happy Endings in German Novels. In DMNLP@PKDD/ECML.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Challenges in Annotation: Annotator Experiences from a Crowdsourced Emotion Annotation Task", "authors": [ { "first": "Emily", "middle": [], "last": "\u00d6hman", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Digital Humanities in the Nordic Countries 5th Conference", "volume": "", "issue": "", "pages": "293--301", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emily \u00d6hman. 2020. Challenges in Annotation: Annotator Experiences from a Crowdsourced Emotion Annotation Task. In Proceedings of the Digital Humanities in the Nordic Countries 5th Conference, pages 293-301. CEUR Workshop Proceedings.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "uris": null, "text": "Example annotation in CATMA. The annotator marked two lines as suffering (purple), and the last part additionally as love (blue). (Excerpt from Canut)", "num": null }, "TABREF0": { "content": "
/comedy)
\u2022 Der Postzug by Ayrenhoff (1769/comedy)
\u2022 Kabale und Liebe by Schiller (1784/tragedy)
\u2022 Kasperl' der Mandolettikr\u00e4mer by Eberl (1789/tragedy)
\u2022 Menschenhass und Reue by Kotzebue (1790/comedy)
\u2022 Faust by Goethe (1807/tragedy)
", "html": null, "num": null, "text": "Das Testament by Gottsched (1745/comedy) \u2022 Canut by Schlegel (1746/tragedy) Most of the plays were acquired as part of the GerDracor-Corpus (Fischer et al., 2019) except for Kasperl' der Mandolettikr\u00e4mer which was acquired via an open web repository. 2", "type_str": "table" }, "TABREF2": { "content": "", "html": null, "num": null, "text": "Distribution of emotion categories. First, the summed results of the main classes (MC; marked in bold) are listed followed by the sub-emotions. Percentages are rounded.", "type_str": "table" }, "TABREF5": { "content": "
Methodacc (pol)F1 (pol)acc (t-p)F1 (t-p)acc (m-c)F1 (m-c)acc (s-e)F1 (s-e)
random baseline0.50-0.33-0.17-0.08-
majority baseline0.60-0.55-0.25-0.15-
bow-svm0.770.750.700.660.530.510.410.38
bow-bayes0.830.830.760.740.590.560.460.41
bert-base0.880.880.830.830.700.700.610.60
bert-europeana0.880.880.830.830.710.700.600.59
electra-europeana0.890.890.830.830.700.690.560.53
bert-historical-rw0.880.880.830.830.720.720.630.63
gbert-large0.890.890.840.840.750.750.660.66
gelectra-large0.900.900.850.850.740.740.640.63
bert-europeana-further-pretrained0.830.830.760.740.450.380.290.23
", "html": null, "num": null, "text": "Evaluation results for the full corpus. F1-scores are weighted F1. pol=polarity, t-p=triple polarity, m-c=main class, s-e=sub-emotion. Best result per classification is marked in bold for accuracies.", "type_str": "table" }, "TABREF6": { "content": "", "html": null, "num": null, "text": "Evaluation results for the filtered corpus. F1-scores are weighted F1. pol=polarity, t-p=triple polarity, m-c=main class, s-e=sub-emotion. Best result per classification is marked in bold for accuracies.", "type_str": "table" }, "TABREF8": { "content": "
", "html": null, "num": null, "text": "Evaluation results for the speech corpus. F1-scores are weighted F1. pol=polarity, t-p=triple polarity, m-c=main class, s-e=sub-emotion. Best result per classification is marked in bold for accuracies.", "type_str": "table" }, "TABREF10": { "content": "
Emotion categoryabsolute %
desire280
love1,03214
friendship1852
admiration4686
joy1,10315
Schadenfreude1812
fear3905
despair1602
anger1,00213
hate, disgust6909
suffering1,04514
compassion3134
emotional movement9069
Overall7,503 100
", "html": null, "num": null, "text": "Distributions of main classes for the filtered corpus. Percentages are rounded.", "type_str": "table" }, "TABREF11": { "content": "
Emotion categoryabsolute %
MC: emotions of affection1,19818
desire270
love6029
friendship1262
admiration4417
MC: emotions of joy1,08816
joy88113
Schadenfreude2013
MC: emotions of fear72511
fear3916
despair3395
MC: emotions of rejection1,53823
anger91914
hate, disgust66010
MC: emotions of suffering1,17517
suffering83312
compassion2974
emotional movement1,02215
Overall6,741 100
", "html": null, "num": null, "text": "Distributions of sub-emotions for the filtered corpus. Polarity distribution is 6,018 negative (60%) and 3,944 positive (40%). Percentages are rounded.", "type_str": "table" }, "TABREF12": { "content": "", "html": null, "num": null, "text": "Distributions of emotions for the speech corpus. Polarity distribution is 3,414 negative (60%) and 2,305 positive (40%). Percentages are rounded.", "type_str": "table" } } } }