{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:33:49.136528Z"
},
"title": "A Deep Learning System for Sentiment Analysis of Service Calls",
"authors": [
{
"first": "Yanan",
"middle": [],
"last": "Jia",
"suffix": "",
"affiliation": {},
"email": "yjia@businessolver.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Sentiment analysis is crucial for the advancement of artificial intelligence (AI). Sentiment understanding can help AI to replicate human language and discourse. Studying the formation and response of sentiment state from well-trained Customer Service Representatives (CSRs) can help make the interaction between humans and AI more intelligent. In this paper, a sentiment analysis pipeline is first carried out with respect to real-world multi-party conversations-that is, service calls. Based on the acoustic and linguistic features extracted from the source information, a novel aggregated method for voice sentiment recognition framework is built. Each party's sentiment pattern during the communication is investigated along with the interaction sentiment pattern between all parties.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Sentiment analysis is crucial for the advancement of artificial intelligence (AI). Sentiment understanding can help AI to replicate human language and discourse. Studying the formation and response of sentiment state from well-trained Customer Service Representatives (CSRs) can help make the interaction between humans and AI more intelligent. In this paper, a sentiment analysis pipeline is first carried out with respect to real-world multi-party conversations-that is, service calls. Based on the acoustic and linguistic features extracted from the source information, a novel aggregated method for voice sentiment recognition framework is built. Each party's sentiment pattern during the communication is investigated along with the interaction sentiment pattern between all parties.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The natural reference for AI systems is human behavior. In human social life, emotional intelligence is important for successful and effective communication. Humans have the natural ability to comprehend and react to the emotions of their communication partners through vocal and facial expressions (Kotti and Patern\u00f2, 2012; Poria et al., 2014a) . A long-standing goal of AI has been to create affective agents that can recognize, interpret and express emotions. Early-stage research in affective computing and sentiment analysis has mainly focused on understanding affect towards entities such as movie, product, service, candidacy, organization, action and so on in monologues, which involves only one person's opinion. However, with the advent of Human-Robot Interaction (HRI) such as voice assistants and customer service chatbots, researchers have started to build empathetic dialogue systems to improve the overall HRI experience by adapting to customers' sentiment.",
"cite_spans": [
{
"start": 299,
"end": 324,
"text": "(Kotti and Patern\u00f2, 2012;",
"ref_id": "BIBREF15"
},
{
"start": 325,
"end": 345,
"text": "Poria et al., 2014a)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Sentiment study of Human-Human Interactions (HHI) can help machines identify and react to human non-verbal communication which makes the HRI experience more natural. The call center is a rich resource of communication data. A large number of calls are recorded daily in order to assess the quality of interactions between CSRs and customers. Learning the sentiment expressions from well-trained CSRs during communication can help AI understand not only what the user says, but also how he/she says it so that the interaction feels more human. In this paper, we target and use real-world data -service calls, which poses additional challenges with respect to the artificial datasets that have been typically used in the past in multimodal sentiment researches (Cambria et al., 2017) , such as variability and noises. The basic 'sentiment' can be described on a scale of approval or disapproval, good or bad, positive or negative, and termed polarity (Poria et al., 2014b) . In the service industry, the key task is to enhance the quality of services by identifying issues that may be caused by systems of rules, or service qualities. These issues are usually expressed by a caller's anger or disappointment on a call. In addition, service chatbots are widely used to answer customer calls. If customers get angry during HRI, the system should be able to transfer the customers to a live agent. In this study, we mainly focuses on identifying 'negative' sentiment, especially 'angry' customers. Given the non-homogeneous nature of full call recordings, which typically include a mixture of negative, and nonnegative statements, sentiment analysis is addressed at the sentence level. Call segments are explored in both acoustic and linguistic modalities. The temporal sentiment patterns between customers and CSRs appearing in calls are described. The paper is organized as follows: Section 2 covers a brief literature review on sentiment recognition from different modalities; Section 3 proposes a pipeline which features our novelties in training data creation using real-world multi-party conversations, including a description of the data acquisition, speaker diarization, transcription, and semisupervised learning annotation; the methodologies for acoustic and linguistic sentiment analysis are presented in Section 4; Section 5 illustrates the methodologies adopted for fusing different modalities; Section 6 presents experimental results including the evaluation measures and temporal sentiment patterns; finally, Section 7 concludes the paper and outlines future work.",
"cite_spans": [
{
"start": 759,
"end": 781,
"text": "(Cambria et al., 2017)",
"ref_id": null
},
{
"start": 949,
"end": 970,
"text": "(Poria et al., 2014b)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this section, we provide a brief overview of related work about text-based and audio-based sentiment analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Sentiment analysis has focused primarily on the processing of text and mainly consists of either rulebased classifiers that make use of large sentiment lexicons, or data-driven methods that assume the availability of a large annotated corpora. Sentiment lexicon is a list of lexical features (e.g. words) which are generally labeled according to their semantic orientation as either positive or negative (Liu, 2010) . Widely used lexicons include binary polarity-based lexicons, such as Harvard General Inquirer (Stone et al., 1966) , Linguistic Inquiry and Word Count (LIWC, pronounced 'Luke') (Pennebaker et al., 2007 (Pennebaker et al., , 2001 , Bing (Liu, 2012) , and valence-based lexicons, such as AFINN (Nielsen, 2011) , SentiWordNet (Alhazmi et al., 2013) , and SnticNet (Cambria et al., 2010) . Employing these lexical, researchers can apply their own rules or use existing rule-based modeling, such as VADER (Hutto and Gilbert, 2015) , to do sentiment analysis. One big advantage for the rule-based models is that these approaches require no training data and generalize to multiple domains. However, since words are annotated based on their context-free semantic orientation, word-sense disambiguation (Hutto and Gilbert, 2015) may occur when the word has multiple meanings. For example, words like 'defeated', 'envious', and 'stunned' are classified as 'positive' in Bing, but '-2' (negative) in AFINN. Although the rule-based algorithm is known to be noisy and limited, a sentiment lexicon is a useful component for any sophisticated sentiment detection algorithm and is one of the main resources to start from (Poria et al., 2014b) . Another major line of work in sentiment analysis consists of data-driven methods based on a large dataset annotated for polarity. The most widely used datasets include the MPQA corpus which is a collection of manually annotated news articles , movie reviews with two polarity (Pang and Lee, 2004a) , a collection of newspaper headlines annotated for polarity (Strapparava and Mihalcea, 2007) . With a large annotated datasets, supervised classifiers have been applied (Go et al., 2009; Pang and Lee, 2004b; dos Santos and Gatti, 2014; Socher et al., 2013; Wang et al., 2016) . Such approaches step away from blind use of keywords and word cooccurrence count, but rather rely on the implicit features associated with large semantic knowledge bases (Cambria et al., 2015) .",
"cite_spans": [
{
"start": 404,
"end": 415,
"text": "(Liu, 2010)",
"ref_id": "BIBREF16"
},
{
"start": 512,
"end": 532,
"text": "(Stone et al., 1966)",
"ref_id": "BIBREF41"
},
{
"start": 595,
"end": 619,
"text": "(Pennebaker et al., 2007",
"ref_id": "BIBREF28"
},
{
"start": 620,
"end": 646,
"text": "(Pennebaker et al., , 2001",
"ref_id": "BIBREF29"
},
{
"start": 654,
"end": 665,
"text": "(Liu, 2012)",
"ref_id": "BIBREF17"
},
{
"start": 710,
"end": 725,
"text": "(Nielsen, 2011)",
"ref_id": "BIBREF23"
},
{
"start": 741,
"end": 763,
"text": "(Alhazmi et al., 2013)",
"ref_id": null
},
{
"start": 779,
"end": 801,
"text": "(Cambria et al., 2010)",
"ref_id": "BIBREF7"
},
{
"start": 918,
"end": 943,
"text": "(Hutto and Gilbert, 2015)",
"ref_id": "BIBREF13"
},
{
"start": 1213,
"end": 1238,
"text": "(Hutto and Gilbert, 2015)",
"ref_id": "BIBREF13"
},
{
"start": 1624,
"end": 1645,
"text": "(Poria et al., 2014b)",
"ref_id": "BIBREF34"
},
{
"start": 1924,
"end": 1945,
"text": "(Pang and Lee, 2004a)",
"ref_id": "BIBREF26"
},
{
"start": 2007,
"end": 2039,
"text": "(Strapparava and Mihalcea, 2007)",
"ref_id": "BIBREF42"
},
{
"start": 2116,
"end": 2133,
"text": "(Go et al., 2009;",
"ref_id": "BIBREF11"
},
{
"start": 2134,
"end": 2154,
"text": "Pang and Lee, 2004b;",
"ref_id": "BIBREF27"
},
{
"start": 2155,
"end": 2182,
"text": "dos Santos and Gatti, 2014;",
"ref_id": "BIBREF36"
},
{
"start": 2183,
"end": 2203,
"text": "Socher et al., 2013;",
"ref_id": "BIBREF40"
},
{
"start": 2204,
"end": 2222,
"text": "Wang et al., 2016)",
"ref_id": "BIBREF44"
},
{
"start": 2395,
"end": 2417,
"text": "(Cambria et al., 2015)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Text-based Sentiment Analysis",
"sec_num": "2.1"
},
{
"text": "Vocal expression is a primary carrier of affective signals in human communication. Speech as signals contains several features that can extract linguistic, speaker-specific information, and emotional. Related work about audio-based sentiment analysis along with multimodal fusion is reviewed in this section. Studies on speech-based sentiment analysis have focused on identifying relevant acoustic features. Use open source software such as OpenEAR (Eyben et al., 2009) , openSMILE (Eyben et al., 2010) , JAudio toolkit (McEnnis et al., 2005) or library packages (McFee et al., 2015; Sueur et al., 2008) to extract features. These features along with some of their statistical derivates are closely related to the vocal prosodic characteristics, such as a tone, a volume, a pitch, an intonation, an inflection, a duration, etc. Supervised or unsupervised classifiers can be fitted based on the statistical derivates of these features (Jain et al., 2018; Pan et al., 2012) . Sequence models can be fitted based on filter banks, Melfrequency cepstral coefficients (MFCCs), or other low-level descriptors extracted from raw speech without feature engineering (Aguilar et al., 2019) . However, this approach usually requires highly efficient computation and large annotated audio files. Multimodal sentiment analysis has started to draw attention recently because of the unlimited multimodality source of information online, such as videos and audios (Cambria et al., 2017; Poria, 2016; Poria et al., 2015) . Most of the multimodal sentiment analysis is focused on monologue videos. In the last few years, sentiment recognition in conversations has started to gain research interest, since reproducing human interaction requires a deep understanding of conversations, and sentiment plays a pivotal role in conversations. The existing conversation datasets are usually recorded in a controlled environment, such as a lab, and segmented into utterances, transcribe to text and annotated with emotion or sentiment labels manually. Widely used datasets include AMI Meeting Corpus (Carletta et al., 2006) , IEMOCAP (Busso et al., 2008) , SEMAINE (Mckeown et al., 2013) and AVEC (Schuller et al., 2012) . Recently, a few recurrent neural network (RNN) models are developed for emotion detection in conversations, e.g. DialogueRNN or ICON (Hazarika et al., 2018) . However they are less accurate in emotion detection for the utterances with emotional shift and the training data requires the speaker information. The conversation models are not employed in our polarity sentiment analysis because of the quality of the data and the approach used to gain the training data. More detailed explanations can be found in Section 3.4. At the heart of any multimodal sentiment analysis engine is the multimodal fusion (Shan et al., 2007; Zeng et al., 2007) . The multimodal fusion integrates all single modalities into a combined single representation. Features are extracted from each modality of the data independently. Decisionlevel fusion feeds the features of each modality into separate classifiers and then combines their decisions. Feature-level fusion concatenates the feature vectors obtained from all modalities and feeds the resulting long vector into a supervised classifier. Recent research on multimodal fusion for sentiment recognition has been conducted at either the feature level or decision level (Poria, 2016; Poria et al., 2015) .",
"cite_spans": [
{
"start": 449,
"end": 469,
"text": "(Eyben et al., 2009)",
"ref_id": "BIBREF10"
},
{
"start": 482,
"end": 502,
"text": "(Eyben et al., 2010)",
"ref_id": "BIBREF9"
},
{
"start": 520,
"end": 542,
"text": "(McEnnis et al., 2005)",
"ref_id": "BIBREF19"
},
{
"start": 563,
"end": 583,
"text": "(McFee et al., 2015;",
"ref_id": "BIBREF20"
},
{
"start": 584,
"end": 603,
"text": "Sueur et al., 2008)",
"ref_id": "BIBREF43"
},
{
"start": 934,
"end": 953,
"text": "(Jain et al., 2018;",
"ref_id": "BIBREF14"
},
{
"start": 954,
"end": 971,
"text": "Pan et al., 2012)",
"ref_id": "BIBREF25"
},
{
"start": 1156,
"end": 1178,
"text": "(Aguilar et al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 1447,
"end": 1469,
"text": "(Cambria et al., 2017;",
"ref_id": null
},
{
"start": 1470,
"end": 1482,
"text": "Poria, 2016;",
"ref_id": "BIBREF30"
},
{
"start": 1483,
"end": 1502,
"text": "Poria et al., 2015)",
"ref_id": "BIBREF4"
},
{
"start": 2072,
"end": 2095,
"text": "(Carletta et al., 2006)",
"ref_id": "BIBREF8"
},
{
"start": 2106,
"end": 2126,
"text": "(Busso et al., 2008)",
"ref_id": "BIBREF3"
},
{
"start": 2137,
"end": 2159,
"text": "(Mckeown et al., 2013)",
"ref_id": "BIBREF22"
},
{
"start": 2169,
"end": 2192,
"text": "(Schuller et al., 2012)",
"ref_id": "BIBREF37"
},
{
"start": 2328,
"end": 2351,
"text": "(Hazarika et al., 2018)",
"ref_id": "BIBREF12"
},
{
"start": 2800,
"end": 2819,
"text": "(Shan et al., 2007;",
"ref_id": "BIBREF39"
},
{
"start": 2820,
"end": 2838,
"text": "Zeng et al., 2007)",
"ref_id": "BIBREF47"
},
{
"start": 3399,
"end": 3412,
"text": "(Poria, 2016;",
"ref_id": "BIBREF30"
},
{
"start": 3413,
"end": 3432,
"text": "Poria et al., 2015)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Audio-based Sentiment Analysis",
"sec_num": "2.2"
},
{
"text": "The data resources used for our experiments are described in Section 3.1. Data preparation including speech transcription and speaker diarization is discussed in Section 3.2. The sentiment annotation guideline is introduced in Section 3.3. Section 3.4 presents a semi-supervised learning annotation pipeline that chains data preparation, model training, model deploying and data monitor.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset and Pipeline",
"sec_num": "3"
},
{
"text": "The main dataset we created in this paper consists of service calls collected from a health care benefits Call Center (named BSCD). Calls are focused on customers looking for help or support with company provided benefits such as health insurance. 500 calls are collected from the call center database covering diverse topics, such as insurance plan information, insurance id card, dependent coverage, etc. The call dataset has female and male speakers randomly selected with their age ranging approximately from 16-80. Calls involving translators are eliminated to keep only speakers expressing themselves in English. All the calls are presented in Wave format with a sample rate of 8000 Hertz and duration varying from 4 minutes to 26 minutes. All calls are pre-processed to eliminate repetitive introductions. The beginning of each call contains an introduction of the users' company name by a robot. To address this issue, the segment before the first pause (silence duration > 1 second) is removed from each call. A robust computational model of sentiment analysis needs to be able to handle real-world variability and noises. While the previous researches on multimodal sentiment or emotion analysis use audio and visual recorded in laboratory settings (Busso et al., 2008; Mckeown et al., 2010 Mckeown et al., , 2013 ; the BSCD gathers real-world calls which contain ambient noise present in most audio recordings, as well as diversity in person-to-person communication patterns. Both of these conditions result in difficulties that need to be addressed in order to effectively extract useful data from these sources.",
"cite_spans": [
{
"start": 1259,
"end": 1279,
"text": "(Busso et al., 2008;",
"ref_id": "BIBREF3"
},
{
"start": 1280,
"end": 1300,
"text": "Mckeown et al., 2010",
"ref_id": "BIBREF21"
},
{
"start": 1301,
"end": 1323,
"text": "Mckeown et al., , 2013",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BSCD: Benefits Service Call Dataset",
"sec_num": "3.1"
},
{
"text": "To discard noise and long pauses (silence duration > 5 seconds) in calls, Voice Activity Detection (VAD) is applied, followed by the application of Automatic Speech Recognition (ASR) and Automatic Speaker Diarization (ASD) to transcribe the verbal statements, extract the start and end time of each utterance, and identify the speaker of each utterance. Each call is segmented into an average of 69 utterances. The duration of the utterances is right-skewed with a median of 2.9 seconds; first and third quantiles 1.6 and 5.1 seconds. By searching keywords such as 'How can I help' in the content of each utterance, speakers are labeled ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preparation",
"sec_num": "3.2"
},
{
"text": "Figure 1: Data preparation workflow as CSR or customer. Each utterance is linked to the corresponding audio stream, auto transcription, as well as speaker label. The workflow and corresponding results for the first 23 seconds of one selected call are shown in Figure 1 , where the original input is a call audio sample. After data preparation, segments of noise and silence are discarded. This call sample is segmented into 4 utterances. The audio streams are from the original audio and split based on the start and end time of each utterance. Auto transcriptions are more likely to be ungrammatical if the recording quality is bad or the conversation contains words that ASR cannot identify or the speakers do not express themselves clearly. The ungrammatical transcriptions usually occur in customer parts and the frequency of ungrammaticality varies from case to case. Although the sentiment recognition of a whole call tends to be robust with respect to speech recognition errors, the sensitivity of each utterance analysis to ASR errors is not reparable given our study. The speaker labels are from ASD output which can be misclassified because of the occurrence of speakers overlapping or speakers with similar acoustic features. Conversation sentiment pattern study can be misleading due to the misclassified ASD output, although misclassified ASD is rare.",
"cite_spans": [],
"ref_spans": [
{
"start": 260,
"end": 268,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Voice Activity Detection",
"sec_num": null
},
{
"text": "This process allows us to study features from both modalities: transcribed words and acoustics. Distinguishing different parties gives us the ability to study the temporal sentiment transitions of individ-ual speakers and interactions among speakers in a conversation. However, since the data preparation is part of the pipeline described in section 3.4, which runs in real-time, sentiment analysis must rely on error-prone ASR and ASD outputs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Voice Activity Detection",
"sec_num": null
},
{
"text": "Sentiment annotation is a challenging task as the label depends on the annotators' perspective, and the differences inherent in the way people express emotions. The sentiment is opinion-based, not factbased. This study aims at identifying negative expressions in calls, especially angry customers who are not satisfied with the services, or the business rules, or the systems of rules. By identifying and studying these types of cases, the business can improve call center services and fix the possible business or system issues. Guidelines are set up for the annotation. The customer negative tag is for negative emotions (e.g. \"I hate the system\"), attitudes (e.g. \"I am not following you\"), evaluations (e.g. \"your service is the worst\"), and negative facts caused by other parties (e.g. \"I never received my card\"). Other negative facts are not considered as negative (e.g. \"My wife died, I need to remove her from my dependents\"). The guidelines for CSRs are different. Well trained CSRs usually do not respond negatively, but there are cases that they cannot help the customers. We identify these cases as negative. Cases where a CSR cannot help the customer usually involve business process or system issues. The sentiment is not always explicit in the text. Borderline linguistic utterances stated loudly and quickly are usually identified as negative (e.g. the utterance \"Trust me, it could be done\" is classified as negative, since it is in the context that the representative fails to help the customer to enroll in the health plan, and in the audio, the customer is irritated). In all the multimodal sentiment analysis, the labels of all modalities are kept consistent for the same utterance. In our data annotation process, we also keep both text and audio labels that agree with each other and the annotation is based on the audio segments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Annotation",
"sec_num": "3.3"
},
{
"text": "To successfully run and train analytical models, massive quantities of stored data are needed. Creating large annotated datasets can be a very time consuming and labor-intensive process. To keep If one call has a long duration (T > 10 minutes) and a high percentage of negative utterances based on D U (> 40% for customer or > 20% for CSR), then we say this call is potentially negative and informative. We then ask an annotator to manually correct the annotated tags D U by listening to the call, and move the results D U (I) to D L . For all the other calls, we only keep the utterances where classifiers all agree as D U (M ). We then remove chunks that are too short (duration < 1s) or too long (duration >20s). Finally, we discard chunks where the annotator cannot discern classification. Using the pipeline, 6,565 negative and 10,322 nonnegative call clips are annotated as the training dataset. The training data D LT still include transcription errors, even though the threshold discussed in the above paragraph is set to eliminate these utterances to add to the training dataset. In addition, 18,705 cleaned text chat data collected from chat windows are also added to D LT via the annotation pipeline to improve the C T accuracy. Instead of checking fusion with certainty, we only keep the utterances with classifiers in C T all agree as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-supervised Learning Annotation Pipeline",
"sec_num": "3.4"
},
{
"text": "Database Committee classifiers C T and C A Automatic Annotation D L ={D LT , D LA } Data Preparation Human Correction Accept Machine Label D U ' (I) D U ' (M) Yes No Fusion with Certainty D U ={D UT , D UA } D' U ={D' UT , D' UA }",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-supervised Learning Annotation Pipeline",
"sec_num": "3.4"
},
{
"text": "D U (M ) = D U T (M ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-supervised Learning Annotation Pipeline",
"sec_num": "3.4"
},
{
"text": "Because of the quality of the calls, the poor performance of the ASR for some cases, and the threshold used to annotate the utterances, more than half of the original call segments are discarded * , and 18,705 text chat data are added to D LT ={transcription data, chat data} without the corresponding audio files in D LA . It is hard to consider the context of the conversation since the segments are not continuing in the training dataset. Therefore, conversation models are not considered in our committee classifiers C.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-supervised Learning Annotation Pipeline",
"sec_num": "3.4"
},
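As a concrete illustration, the call- and utterance-level selection rules described in this section can be written as a short sketch. This is our reading of the pipeline, not the authors' code; the thresholds come from the text, while the utterance representation (dictionaries with 'speaker', 'label', and 'duration' fields) is an assumption.

```python
# Hedged sketch of the Section 3.4 selection rules; data structures are assumed.
def call_needs_human_correction(utterances, call_minutes):
    """Flag a call as potentially negative and informative (long + high negative ratio)."""
    def neg_ratio(role):
        segs = [u for u in utterances if u["speaker"] == role]
        return sum(u["label"] == "negative" for u in segs) / max(len(segs), 1)
    return call_minutes > 10 and (neg_ratio("customer") > 0.40 or neg_ratio("CSR") > 0.20)

def keep_machine_labeled(utterance, committee_labels):
    """Keep an utterance only if all committee classifiers agree and its duration is 1-20 s."""
    return len(set(committee_labels)) == 1 and 1.0 <= utterance["duration"] <= 20.0
```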
{
"text": "To model information for sentiment analysis from calls, we first obtain the streams corresponding to each modality via the methods described in Section 3.2, followed by the extraction of a representative set of features for each modality. These features are then used as cues to build classifiers of binary sentiment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bimodal Sentiment Analysis",
"sec_num": "4"
},
{
"text": "General approaches such as sentiment lexicons and sentiment APIs are easy to apply. Both approaches are employed in C T to monitor the utterance prediction labels in the early stage of semi-supervised learning annotation to extend training data. VADER (Hutto and Gilbert, 2015) is a simple rulebased model for general sentiment analysis. The results have four categories: compound, negative, neutral, and positive. We classify utterances with negative output as negative, neutral and positive as nonnegative \u2020 so that it is consistent with BSCD annotation. This model has many advantages, such as being less computationally expensive and easily interpretable. However, one of the main issues with only using lexicons is that most utterances do not contain polarized words. The utterances without polarized words are usually classified as neutral or nonnegative \u2021 . Sentiment analysis API is another way to classify sentiment without extra training data. Amazon offers Sentiment Analysis in Amazon Comprehend (AWSSA), which uses machine learning to find insights and relationships in a text. The result returns Mixed, Negative, Neutral, or Positive classification. To be consistent with the BSCD we created, Neutral and Positive are combined as one class: nonnegative \u2020 . Another sentiment analysis on Google Cloud Natural Language API (GoogleSA) also performs sentiment analysis on text. Sentiment analysis attempts to determine the overall attitude and is represented by numerical scores and magnitude values. We simply set utterances with negative scores as negative and nonnegative otherwise. For machine learning-oriented techniques by linguistic features, we evaluated well-known SVM, LSTM, and BLSTM models. Since the data is unbalanced and we want the model to focus more on the negative class, we apply weighted loss functions during the training. Hyperparameters are tuned for each model, and ensemble models are also developed by taking the weighted majority vote.",
"cite_spans": [
{
"start": 252,
"end": 277,
"text": "(Hutto and Gilbert, 2015)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Analysis of Textual Data",
"sec_num": "4.1"
},
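As a concrete illustration of the lexicon-based baseline in C_T, the sketch below maps VADER's output onto the negative/nonnegative scheme used for BSCD. The vaderSentiment package is real, but the exact mapping rule (thresholding the compound score at zero) is our assumption; the text only states that negative outputs map to negative and neutral/positive to nonnegative.

```python
# Illustrative sketch of the VADER baseline; the compound-score threshold is an assumption.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def vader_label(utterance: str) -> str:
    scores = analyzer.polarity_scores(utterance)  # keys: 'neg', 'neu', 'pos', 'compound'
    return "negative" if scores["compound"] < 0 else "nonnegative"

print(vader_label("Your service is the worst."))       # negative
print(vader_label("Thanks, that resolves my issue."))  # nonnegative
```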
{
"text": "Feature engineering heavily relies on expert knowledge about data features. To better understand the human hearing process, we study the acoustic features based on human perception. Three perceptual categories are described in this section. Their corresponding features are usually short-term based features that are extracted from every short-term window (or frame). Long-term features can be generated by aggregating the short-term features extracted from several consecutive frames within a time window. For each short-term acoustic feature, we calculated nine statistical aggregations: mean, standard deviation, quantiles (5%, 25%, 50%, 75%, 95%), range (95%-5% quantile), and interquartile range (75%-25% quantile) to get the long-term features of each segment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Analysis of Acoustic Data",
"sec_num": "4.2"
},
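A minimal sketch of the nine statistical aggregations described above, applied to the frame-level values of one acoustic feature for a single segment; the function name and input format are ours, not the paper's.

```python
import numpy as np

def long_term_features(frame_values: np.ndarray) -> dict:
    """Aggregate one short-term feature over a segment into the nine long-term statistics."""
    q05, q25, q50, q75, q95 = np.percentile(frame_values, [5, 25, 50, 75, 95])
    return {
        "mean": float(np.mean(frame_values)),
        "std": float(np.std(frame_values)),
        "q05": q05, "q25": q25, "q50": q50, "q75": q75, "q95": q95,
        "range": q95 - q05,  # 95% - 5% quantile
        "iqr": q75 - q25,    # 75% - 25% quantile
    }
```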
{
"text": "\u2022 Loudness is the subjective perception of sound pressure which is related to sound intensity. Amplitude and mean frequency spectrum features are extracted to measure loudness. The greater the amplitude of the vibrations, the greater the amount of energy carried by the wave, and the more intense the sound will be.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Analysis of Acoustic Data",
"sec_num": "4.2"
},
{
"text": "\u2022 Sharpness is a measure of the high-frequency content of a sound, the greater the proportion of high frequencies the sharper the sound. Fundamental frequency (pitch) and dominant frequency are extracted.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Analysis of Acoustic Data",
"sec_num": "4.2"
},
{
"text": "\u2022 Speaking rate is normally defined as the number of words spoken per minute. In general, the speaking rate is characterized by different parameters of speech such as pause and vowel durations. In our study, speaking rate is measured by pause duration, character per second (CPS), and word per second (WPS) which are calculated as following for the ith segment:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Analysis of Acoustic Data",
"sec_num": "4.2"
},
{
"text": "Pause duration i = T silence i T total i CPS i = N character i T total i , WPS i = N word i T total i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Analysis of Acoustic Data",
"sec_num": "4.2"
},
{
"text": "where for segment i, T i denotes the time, and N i denotes the number of characters or words in the corresponding transcription. Pause duration can be interpreted as the percentage of the time where the speaker is silent in each segment. The three variables are aggregated statistics, long-term features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Analysis of Acoustic Data",
"sec_num": "4.2"
},
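The speaking-rate features defined above reduce to a few ratios per segment. The sketch below assumes the silence duration comes from the VAD step and the transcription from ASR; counting non-whitespace characters for CPS is our reading rather than something the paper states.

```python
def speaking_rate_features(transcript: str, total_seconds: float, silence_seconds: float) -> dict:
    """Compute pause-duration ratio, characters per second, and words per second for one segment."""
    return {
        "pause_duration": silence_seconds / total_seconds,
        "cps": len(transcript.replace(" ", "")) / total_seconds,
        "wps": len(transcript.split()) / total_seconds,
    }

print(speaking_rate_features("I never received my card", total_seconds=2.0, silence_seconds=0.4))
```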
{
"text": "In nonnegative cases, speakers are in a relaxed and normal emotional state. An agitated or angry emotional state speaker is typically characterized by increased vocal loudness, sharpness, and speaking rate. C A ={Elastic-Net, KNN, RF, GMM} are built based on the 39 selected features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Analysis of Acoustic Data",
"sec_num": "4.2"
},
{
"text": "Hand-crafted features are generally very successful for specificity sound analysis tasks. One of the main drawbacks of feature engineering is that it relies on transformations that are defined beforehand and ignore some particularities of the signals observed at runtime such as recording conditions and recording devices. A more common approach is to select and adapt features initially introduced for other tasks. A now well-established example of this trend is the popularity of MFCC features (Serizel et al., 2018) . In our experiments, MFCC is extracted from each segment and fed to RNN models in later iterations with |D LA | > 10, 000.",
"cite_spans": [
{
"start": 496,
"end": 518,
"text": "(Serizel et al., 2018)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Analysis of Acoustic Data",
"sec_num": "4.2"
},
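As a sketch of the MFCC input to the RNN models, the snippet below uses librosa (which the related work cites for audio feature extraction); the 13-coefficient setting is a common default and our assumption, not a value reported in the paper.

```python
import librosa

def segment_mfcc(wav_path: str, n_mfcc: int = 13):
    """Return a (num_frames, n_mfcc) matrix of MFCCs for one call segment."""
    y, sr = librosa.load(wav_path, sr=8000)  # calls are sampled at 8000 Hz
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T
```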
{
"text": "There are two main fusion techniques: feature-level fusion and decision-level fusion. In our experiments, we employ decision-level fusion. Decisionlevel fusion has many advantages (Poria et al., 2015) . One benefit of the decision-level fusion is we can use classifiers for text and audio features separately. The unimodal classifier can use data from another communication channel of the same type to improve its accuracy, e.g. text data from the chat windows is borrowed to improve the C T accuracy in our study. Separating modalities permit us to use any learner suitable for the particular problem at hand. In practice, the two unimodal classifiers can be applied separately, e.g. to analyze text data from chat windows D U = D U T , apply C T only to get sentiment labels D U T , then add",
"cite_spans": [
{
"start": 180,
"end": 200,
"text": "(Poria et al., 2015)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fusion",
"sec_num": "5"
},
{
"text": "D U T (M ) to D LT .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fusion",
"sec_num": "5"
},
{
"text": "Another benefit of the decisionlevel fusion is its processing speed since fewer features are used for each classifier and separate classifiers can be run in parallel. Decision-level fusion usually adds probabilities or summarized predictions from each unimodal classifier with weights or takes the majority voting among the predicted class labels by unimodal classifiers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fusion",
"sec_num": "5"
},
{
"text": "In this paper, various fusion methods are evaluated, including two novel approaches that use linguistic ensemble results as the baseline, while then checking acoustic results to modify classification decisions. In Fus1, if the audio ensemble classifies negative and one or more text models classifies negative, we then reclassify the result to negative. In Fus2, if the audio ensemble classifies a sample as negative, we then reclassify the result to negative directly without checking the linguistic modality. The Fus1 and Fus2 approaches are proposed, because for borderline linguistic utterances, acoustic features are more important than linguistic features to understand the spoken intention of the speaker.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fusion",
"sec_num": "5"
},
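The two fusion rules can be written as a small decision function. This is our reading of Fus1 and Fus2 as described above; the label encoding and argument names are assumptions.

```python
def fuse(text_ensemble: str, text_model_labels: list, audio_ensemble: str, rule: str = "Fus2") -> str:
    """Start from the linguistic ensemble label; let the acoustic ensemble flip it to negative."""
    if audio_ensemble == "negative":
        if rule == "Fus2":
            return "negative"                                   # reclassify unconditionally
        if rule == "Fus1" and "negative" in text_model_labels:  # needs at least one text model
            return "negative"
    return text_ensemble
```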
{
"text": "The test dataset consists of 21 calls with 1,890 utterances, which are manually annotated for negative (848) and nonnegative (1,042).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Results",
"sec_num": "6"
},
{
"text": "As evaluation measures, we rely on accuracy and weighted F1-score, which is the weighted harmonic mean of precision and recall. Precision is the probability of returning values that are correct. Recall, also known as sensitivity is the probability of relevant values that the algorithm outputs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Measures",
"sec_num": "6.1"
},
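The metrics above correspond to standard accuracy and support-weighted F1. A minimal sketch using scikit-learn (our choice of tooling; the paper does not name its implementation):

```python
from sklearn.metrics import accuracy_score, f1_score

y_true = ["negative", "nonnegative", "negative", "nonnegative"]
y_pred = ["negative", "nonnegative", "nonnegative", "nonnegative"]

print(accuracy_score(y_true, y_pred))                # 0.75
print(f1_score(y_true, y_pred, average="weighted"))  # per-class F1 averaged by class support
```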
{
"text": "As shown in Table 1 , general approaches in C T , Vader and APIs, tend to have a low negative recall. The semantic knowledge based classifiers have more than 20% higher weighted F1-score than the general approaches. The classifiers are trained on D LT ={transcription data, chat data}. The overall weighted F1-score is more than 10% higher than the classifiers trained on call transcription only data \u00a7 . BLSTM on MFCC performs better than C A = {Elastic-Net (penalty 0.2||\u03b2|| 1 + 0.4||\u03b2|| 2 2 ), KNN (k = 3), RF, GMM} on acoustic features. Using audio features alone, a weighted F1-score of 0.584 Table 2 : Binary classification of sentiment polarity on both linguistic and acoustic modalities can be reached, which is acceptable considering that the real world audio-only system exclusively analyzes the tone of the speaker's voice and doesn't consider any language information. The acoustic modality is significantly weaker than the linguistic modality. Usually, speakers' tones are not signifcantly different from the tones under normal emotional state even the content is negative (e.g. \"We messed up.\" with negative tag ). 97% of the segments with correct D U T but wrong D U A have negative as true tag. The other 3% are the nonnegate segments with emphasized words (e.g. \" But I do have a newborn coming.\" with nonnegtive tag). In most cases, text already includes enough information to judge the sentiment. A few observed typical situations leading to linguistic modality misclassification are the presence of misleading linguistic cues caused by overlapping or other issues (e.g. ASR \"Customer: I love it. It can be done.\" and true transcription \"CSR: I... Customer: Drop it. It can be done.\" with negative tag), ambiguous linguistic utterances whose sentiment polarity are highly dependent on the context described in earlier or later part of the call (e.g. \"But I got a call from your service center today apologizing, saying, Yeah, we made a mistake.\" with nonnegative tag), or nonnegative linguistic utterances stated angrily (e.g. \"So I think you should honor those amounts.\" with negative tag).",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 19,
"text": "Table 1",
"ref_id": "TABREF2"
},
{
"start": 598,
"end": 605,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation Measures",
"sec_num": "6.1"
},
{
"text": "In order to achieve better accuracy, we combine the two modalities together to exploit complementary information. We simply combine results of the three semantic knowledge based classifiers and all the five audio classifiers by taking the weighted majority vote. The T+A ensemble results are shown in Table 2 and they do not improve when compared to the unimodal text ensemble results. Since the unimodal performance of linguistic modality is notably better than acoustic modality, our decision-level fusion methods use linguistic ensemble results as the base-line, while acoustic results are used as supplemental information to calibrate each classification. Fus1 reclassifies the ambiguous linguistic utterances, and Fus2 reclassifies the nonnegative/ambiguous linguistic utterances based on audio ensemble classifies. The two novel fusion approaches discussed in Section 5 are tested. The Fus2 bimodal system yields a 2% improvement in weighted F1-score than the text unimodal system. McNemar's test is applied to compare the accuracy of text only results D U T and Fus2 results D U F 2 \u03c7 2 = (14 \u2212 52) 2 14 + 52 = 21.88,",
"cite_spans": [],
"ref_spans": [
{
"start": 301,
"end": 308,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation Measures",
"sec_num": "6.1"
},
{
"text": "where the number of segments with correct D U T wrong D U F 2 is 14, and wrong D U T correct D U F 2 is 52. The McNemar's test gives \u03c7 2 = 21.88 and P < 0.001, which implies a statistically significant effect by adding acoustic features using the Fus2 approach. The acoustic modality provides important cues to identify borderline linguistic segments with negative emotions. Our results show that relying on the joint use of linguistic and acoustic modalities allows us to better sense the sentiment being expressed as compared to the use of only one modality at a time. The acoustic feature analysis helps us to better understand the spoken intention of the speaker, which is not normally expressed through text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Measures",
"sec_num": "6.1"
},
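The McNemar statistic quoted above follows directly from the two discordant counts. A small verification sketch (the continuity-uncorrected form, matching the computation in the text):

```python
# 14 segments: correct with text only, wrong after Fus2; 52 segments: the reverse.
b, c = 14, 52
chi2 = (b - c) ** 2 / (b + c)
print(round(chi2, 2))  # 21.88; with 1 degree of freedom this gives P < 0.001
```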
{
"text": "The sentiment is not only regarded as an internal psychological phenomena but also interpreted and processed communicatively through social interactions. Conversations exemplify such a scenario where inter-personal sentiment influences persist. The left panel in Figure 3 shows the negative scores of customers and CSRs in 21 test calls. The negative score, a weighted negative segment percentage, is calculated to analyze the overall sentiment. Weights 0.8, 1, and 1.2 are assigned to the first third, second third and last third of each call. Since long pauses in calls are discarded in the data preparation process, these segments do not have sentiment labels and do not contribute to the negative score. The negative scores of CRSs are commonly lower than customers', and usually high negative scores for customers correspond to high negative scores for CSRs. We can conclude from the figure that sentiment can be affected by other parties during a conversation.",
"cite_spans": [],
"ref_spans": [
{
"start": 263,
"end": 271,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Tempo Sentiment Pattern",
"sec_num": "6.2"
},
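One plausible reading of the weighted negative score described above is sketched below. The paper specifies the weights (0.8, 1, 1.2 for the first, second, and last third of a call) but not the exact normalization, so treating the score as a weighted fraction of labeled segments is our assumption.

```python
def negative_score(labels, positions, call_duration):
    """labels: 'negative'/'nonnegative' per labeled segment; positions: segment start times (s)."""
    def weight(t):
        third = call_duration / 3
        return 0.8 if t < third else 1.0 if t < 2 * third else 1.2
    weights = [weight(t) for t in positions]
    negative = sum(w for w, lab in zip(weights, labels) if lab == "negative")
    return negative / sum(weights)
```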
{
"text": "To further analyze the interactions between customers and CSRs, the cumulative negative scores for call 6, 15, and 16 are drawn on the right panel of Figure 3 . The x-axis shows time of the whole call in seconds including noise and long pauses. Call 6 shows the sentiment patterns of a typical bad call, which is characterized by long duration and long pauses. The two long pauses are from 444s to 607s and from 921s to 1008s. Between the two long pauses, there are three customer and CSR overlapping segments, but the Automatic Speaker Diarization recognizes all of them as CSRs. The customer has a high negative score from beginning to end, and the CSR fails to help the customer during the call. Call 15 is a typical good call. The overall negative score is low and the negative score pattern goes down for both the customer and the CSR, which means the problem is resolved by the end of the call. Call 16 is another type of call, in which the customer does not get angry even though the CSR is unable to solve his/her issues.",
"cite_spans": [],
"ref_spans": [
{
"start": 150,
"end": 158,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Tempo Sentiment Pattern",
"sec_num": "6.2"
},
{
"text": "A new dataset BSCD consisting of real-world conversation, the service calls, is introduced. Human communication is a dynamic process, and our eventual goal is to develop a bimodal sentiment analysis engine with the ability to learn the temporal interaction sentiment patterns among conversation parties. In the process of fusion, we have approached the study of audio sentiment analysis from an angle that is somewhat different from most people's. Future research will concentrate on evaluations using larger data sets, exploring more acoustic feature relevance analysis, and striving to improve the decision-level fusion process. A call is constituent of a group of utterances that have contextual dependencies among them. However, in our semi-supervised learning annotation pipeline, about half of the segments in calls are discarded. Therefore the interdependent modeling is out of the scope of this paper and we include it as future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Future Work",
"sec_num": "7"
},
{
"text": "* The accuracy on the test data decreases by 8% when including all the call segments in the training dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "\u2020 Utterances with compound or mixed class are very few, and they are discarded to keep the training data clear.\u2021 This conclusion is verified by the high Rec(+) and low Rec(-) shown in table 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "\u00a7 Weighted F1-scores are 0.718 (SVM), 0.719 (LSTM) and 0.714 (BLSTM).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The author wishes to express sincere appreciation to Sony SungChu for the support of the project and the comments that greatly improved the manuscript, and the anonymous reviewers for their insightful suggestions to help clarify the manuscript.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Multimodal and multi-view models for emotion recognition",
"authors": [
{
"first": "Gustavo",
"middle": [],
"last": "Aguilar",
"suffix": ""
},
{
"first": "Viktor",
"middle": [],
"last": "Rozgic",
"suffix": ""
},
{
"first": "Weiran",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Chao",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1906.10198"
]
},
"num": null,
"urls": [],
"raw_text": "Gustavo Aguilar, Viktor Rozgic, Weiran Wang, and Chao Wang. 2019. Multimodal and multi-view models for emotion recognition. arXiv:1906.10198. Version 1.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Arabic SentiWordNet in relation to SentiWordNet 3.0",
"authors": [],
"year": null,
"venue": "International Journal of Computational Linguistics",
"volume": "4",
"issue": "",
"pages": "1--11",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arabic SentiWordNet in relation to SentiWordNet 3.0. International Journal of Computational Linguistics, 4:1-11.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "IEMOCAP: Interactive emotional dyadic motion capture database",
"authors": [
{
"first": "Carlos",
"middle": [],
"last": "Busso",
"suffix": ""
},
{
"first": "Murtaza",
"middle": [],
"last": "Bulut",
"suffix": ""
},
{
"first": "Chi-Chun",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Abe",
"middle": [],
"last": "Kazemzadeh",
"suffix": ""
},
{
"first": "Emily",
"middle": [
"Mower"
],
"last": "Provost",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Jeannette",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Sungbok",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Shrikanth",
"middle": [],
"last": "Narayanan",
"suffix": ""
}
],
"year": 2008,
"venue": "Language Resources and Evaluation",
"volume": "42",
"issue": "",
"pages": "335--359",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carlos Busso, Murtaza Bulut, Chi-Chun Lee, Abe Kazemzadeh, Emily Mower Provost, Samuel Kim, Jeannette Chang, Sungbok Lee, and Shrikanth Narayanan. 2008. IEMOCAP: Interactive emotional dyadic motion capture database. Language Resources and Evaluation, 42:335-359.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "AffectiveSpace 2: Enabling affective intuition for concept-level sentiment analysis",
"authors": [
{
"first": "Erik",
"middle": [],
"last": "Cambria",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Fu",
"suffix": ""
},
{
"first": "Federica",
"middle": [],
"last": "Bisio",
"suffix": ""
},
{
"first": "Soujanya",
"middle": [],
"last": "Poria",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. AAAI",
"volume": "",
"issue": "",
"pages": "508--514",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erik Cambria, J. Fu, Federica Bisio, and Soujanya Po- ria. 2015. AffectiveSpace 2: Enabling affective intu- ition for concept-level sentiment analysis. Proc. AAAI, pages 508-514.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Benchmarking multimodal sentiment analysis",
"authors": [],
"year": null,
"venue": "",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1707.09538"
]
},
"num": null,
"urls": [],
"raw_text": "Benchmarking multimodal sentiment analysis. arXiv:1707.09538. Version 1.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "SenticNet: A publicly available semantic resource for opinion mining",
"authors": [
{
"first": "Erik",
"middle": [],
"last": "Cambria",
"suffix": ""
},
{
"first": "Robyn",
"middle": [],
"last": "Speer",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Havasi",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Hussain",
"suffix": ""
}
],
"year": 2010,
"venue": "AAAI Fall Symposium -Technical Report",
"volume": "",
"issue": "",
"pages": "14--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erik Cambria, Robyn Speer, C. Havasi, and Amir Hus- sain. 2010. SenticNet: A publicly available semantic resource for opinion mining. AAAI Fall Symposium - Technical Report, pages 14-18.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The AMI meeting corpus: A pre-announcement",
"authors": [
{
"first": "Jean",
"middle": [],
"last": "Carletta",
"suffix": ""
},
{
"first": "Simone",
"middle": [],
"last": "Ashby",
"suffix": ""
},
{
"first": "Sebastien",
"middle": [],
"last": "Bourban",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Flynn",
"suffix": ""
},
{
"first": "Mael",
"middle": [],
"last": "Guillemot",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Hain",
"suffix": ""
},
{
"first": "Jaroslav",
"middle": [],
"last": "Kadlec",
"suffix": ""
},
{
"first": "Vasilis",
"middle": [],
"last": "Karaiskos",
"suffix": ""
},
{
"first": "Wessel",
"middle": [],
"last": "Kraaij",
"suffix": ""
},
{
"first": "Melissa",
"middle": [],
"last": "Kronenthal",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Lathoud",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lincoln",
"suffix": ""
},
{
"first": "Agnes",
"middle": [],
"last": "Lisowska",
"suffix": ""
},
{
"first": "Iain",
"middle": [],
"last": "Mccowan",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Second International Conference on Machine Learning for Multimodal Interaction",
"volume": "",
"issue": "",
"pages": "28--39",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jean Carletta, Simone Ashby, Sebastien Bourban, Mike Flynn, Mael Guillemot, Thomas Hain, Jaroslav Kadlec, Vasilis Karaiskos, Wessel Kraaij, Melissa Kronenthal, Guillaume Lathoud, Mike Lincoln, Agnes Lisowska, Iain McCowan, Wilfried Post, Dennis Reidsma, and Pierre Wellner. 2006. The AMI meeting corpus: A pre-announcement. In Proceedings of the Second In- ternational Conference on Machine Learning for Mul- timodal Interaction, pages 28-39, Berlin, Heidelberg. Springer-Verlag.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "openSMILE -the munich versatile and fast open-source audio feature extractor",
"authors": [
{
"first": "Florian",
"middle": [],
"last": "Eyben",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "W\u00f6llmer",
"suffix": ""
},
{
"first": "Bj\u00f6rn",
"middle": [],
"last": "Schuller",
"suffix": ""
}
],
"year": 2010,
"venue": "MM'10 -Proceedings of the ACM Multimedia 2010 International Conference",
"volume": "",
"issue": "",
"pages": "1459--1462",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Florian Eyben, Martin W\u00f6llmer, and Bj\u00f6rn Schuller. 2010. openSMILE -the munich versatile and fast open-source audio feature extractor. MM'10 -Proceed- ings of the ACM Multimedia 2010 International Con- ference, pages 1459-1462.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "openEAR -introducing the munich opensource emotion and affect recognition toolkit. Affective Computing and Intelligent Interaction and Workshops",
"authors": [
{
"first": "Florian",
"middle": [],
"last": "Eyben",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Wllmer",
"suffix": ""
},
{
"first": "Bj\u00f6rn",
"middle": [],
"last": "Schuller",
"suffix": ""
}
],
"year": 2009,
"venue": "ACII 2009. 3rd International Conference",
"volume": "",
"issue": "",
"pages": "1--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Florian Eyben, Martin Wllmer, and Bj\u00f6rn Schuller. 2009. openEAR -introducing the munich open- source emotion and affect recognition toolkit. Affective Computing and Intelligent Interaction and Workshops, 2009. ACII 2009. 3rd International Conference, pages 1 -6.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Twitter sentiment classification using distant supervision",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Go",
"suffix": ""
},
{
"first": "Richa",
"middle": [],
"last": "Bhayani",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Go, Richa Bhayani, and Lei Huang. 2009. Twit- ter sentiment classification using distant supervision. CS224N Project Report, 150.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "ICON: Interactive conversational memory network for multimodal emotion detection",
"authors": [
{
"first": "Devamanyu",
"middle": [],
"last": "Hazarika",
"suffix": ""
},
{
"first": "Soujanya",
"middle": [],
"last": "Poria",
"suffix": ""
},
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Cambria",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Zimmermann",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "2594--2604",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1280"
]
},
"num": null,
"urls": [],
"raw_text": "Devamanyu Hazarika, Soujanya Poria, Rada Mihalcea, Erik Cambria, and Roger Zimmermann. 2018. ICON: Interactive conversational memory network for multi- modal emotion detection. pages 2594-2604.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "VADER: A parsimonious rule-based model for sentiment analysis of social media text",
"authors": [
{
"first": "C",
"middle": [
"J"
],
"last": "Hutto",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Gilbert",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 8th International Conference on Weblogs and Social Media, ICWSM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C.J. Hutto and Eric Gilbert. 2015. VADER: A parsimo- nious rule-based model for sentiment analysis of social media text. Proceedings of the 8th International Con- ference on Weblogs and Social Media, ICWSM 2014.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Speech emotion recognition using support vector machine",
"authors": [
{
"first": "Manas",
"middle": [],
"last": "Jain",
"suffix": ""
},
{
"first": "Shruthi",
"middle": [],
"last": "Narayan",
"suffix": ""
},
{
"first": "Pratibha",
"middle": [],
"last": "Balaji",
"suffix": ""
},
{
"first": "Abhijit",
"middle": [],
"last": "Bhowmick",
"suffix": ""
},
{
"first": "Rajesh",
"middle": [],
"last": "Muthu",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manas Jain, Shruthi Narayan, Pratibha Balaji, Abhijit Bhowmick, and Rajesh Muthu. 2018. Speech emotion recognition using support vector machine.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Speakerindependent emotion recognition exploiting a psychologically-inspired binary cascade classification schema",
"authors": [
{
"first": "Margarita",
"middle": [],
"last": "Kotti",
"suffix": ""
},
{
"first": "Fabio",
"middle": [],
"last": "Patern\u00f2",
"suffix": ""
}
],
"year": 2012,
"venue": "Int J Speech Technol",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Margarita Kotti and Fabio Patern\u00f2. 2012. Speaker- independent emotion recognition exploiting a psychologically-inspired binary cascade classifica- tion schema. Int J Speech Technol, 15.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Sentiment analysis and subjectivity. Handbook of Natural Language Processing",
"authors": [
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bing Liu. 2010. Sentiment analysis and subjectivity. Handbook of Natural Language Processing, Second Edition.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Sentiment analysis and opinion mining",
"authors": [
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2012,
"venue": "Synthesis Lectures on Human Language Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bing Liu. 2012. Sentiment analysis and opinion min- ing. In Synthesis Lectures on Human Language Tech- nologies.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "DialogueRNN: An attentive RNN for emotion detection in conversations",
"authors": [
{
"first": "Navonil",
"middle": [],
"last": "Majumder",
"suffix": ""
},
{
"first": "Soujanya",
"middle": [],
"last": "Poria",
"suffix": ""
},
{
"first": "Devamanyu",
"middle": [],
"last": "Hazarika",
"suffix": ""
},
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Gelbukh",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Cambria",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "33",
"issue": "",
"pages": "6818--6825",
"other_ids": {
"DOI": [
"10.1609/aaai.v33i01.33016818"
]
},
"num": null,
"urls": [],
"raw_text": "Navonil Majumder, Soujanya Poria, Devamanyu Haz- arika, Rada Mihalcea, Alexander Gelbukh, and Erik Cambria. 2019. DialogueRNN: An attentive RNN for emotion detection in conversations. Proceedings of the AAAI Conference on Artificial Intelligence, 33:6818- 6825.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "jAudio: An feature extraction library",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Mcennis",
"suffix": ""
},
{
"first": "Cory",
"middle": [],
"last": "Mckay",
"suffix": ""
},
{
"first": "Ichiro",
"middle": [],
"last": "Fujinaga",
"suffix": ""
},
{
"first": "Philippe",
"middle": [],
"last": "Depalle",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the International Conference on Music Information Retrieval",
"volume": "",
"issue": "",
"pages": "600--603",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel McEnnis, Cory McKay, Ichiro Fujinaga, and Philippe Depalle. 2005. jAudio: An feature extraction library. Proceedings of the International Conference on Music Information Retrieval, pages 600-603.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "librosa: Audio and music signal analysis in python",
"authors": [
{
"first": "Brian",
"middle": [],
"last": "Mcfee",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Raffel",
"suffix": ""
},
{
"first": "Dawen",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Ellis",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Mcvicar",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Battenberg",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Nieto",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 14th Python in Science Conference",
"volume": "8",
"issue": "",
"pages": "18--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brian McFee, Colin Raffel, Dawen Liang, Daniel Ellis, Matt Mcvicar, Eric Battenberg, and Oriol Nieto. 2015. librosa: Audio and music signal analysis in python. in Proceedings of the 14th Python in Science Conference, 8:18-24.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "The SEMAINE corpus of emotionally coloured character interactions",
"authors": [
{
"first": "Gary",
"middle": [],
"last": "Mckeown",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Valstar",
"suffix": ""
},
{
"first": "Roddy",
"middle": [],
"last": "Cowie",
"suffix": ""
},
{
"first": "Maja",
"middle": [],
"last": "Pantic",
"suffix": ""
}
],
"year": 2010,
"venue": "IEEE International Conference on Multimedia and Expo, ICME 2010",
"volume": "",
"issue": "",
"pages": "1079--1084",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gary Mckeown, Michel Valstar, Roddy Cowie, and Maja Pantic. 2010. The SEMAINE corpus of emotion- ally coloured character interactions. 2010 IEEE Inter- national Conference on Multimedia and Expo, ICME 2010, pages 1079-1084.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "The SEMAINE database: Annotated multimodal records of emotionally colored conversations between a person and a limited agent",
"authors": [
{
"first": "Gary",
"middle": [],
"last": "Mckeown",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Valstar",
"suffix": ""
},
{
"first": "Roddy",
"middle": [],
"last": "Cowie",
"suffix": ""
},
{
"first": "Maja",
"middle": [],
"last": "Pantic",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Schroder",
"suffix": ""
}
],
"year": 2013,
"venue": "IEEE Transactions on",
"volume": "3",
"issue": "",
"pages": "5--17",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gary Mckeown, Michel Valstar, Roddy Cowie, Maja Pantic, and M. Schroder. 2013. The SEMAINE database: Annotated multimodal records of emotion- ally colored conversations between a person and a lim- ited agent. Affective Computing, IEEE Transactions on, 3:5-17.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "A new ANEW: Evaluation of a word list for sentiment analysis in microblogs",
"authors": [
{
"first": "Finn",
"middle": [],
"last": "Nielsen",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Finn Nielsen. 2011. A new ANEW: Evaluation of a word list for sentiment analysis in microblogs. CoRR.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Bootstrapping named entity annotation by means of active machine learning: a method for creating corpora",
"authors": [
{
"first": "Fredrik",
"middle": [],
"last": "Olsson",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fredrik Olsson. 2008. Bootstrapping named entity annotation by means of active machine learning: a method for creating corpora.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Speech emotion recognition using support vector machine",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Shen",
"suffix": ""
}
],
"year": 2012,
"venue": "Int. J. Smart Home",
"volume": "6",
"issue": "",
"pages": "101--108",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y Pan, P Shen, and L Shen. 2012. Speech emotion recognition using support vector machine. Int. J. Smart Home, 6:101-108.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2004,
"venue": "Computing Research Repository -CORR",
"volume": "",
"issue": "",
"pages": "271--278",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Pang and Lillian Lee. 2004a. A sentimental edu- cation: Sentiment analysis using subjectivity summa- rization based on minimum cuts. Computing Research Repository -CORR, 271-278:271-278.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04)",
"volume": "",
"issue": "",
"pages": "271--278",
"other_ids": {
"DOI": [
"10.3115/1218955.1218990"
]
},
"num": null,
"urls": [],
"raw_text": "Bo Pang and Lillian Lee. 2004b. A sentimental ed- ucation: Sentiment analysis using subjectivity sum- marization based on minimum cuts. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04), pages 271-278, Barcelona, Spain.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "The development and psychometric properties of LIWC2007",
"authors": [
{
"first": "James",
"middle": [],
"last": "Pennebaker",
"suffix": ""
},
{
"first": "Cindy",
"middle": [],
"last": "Chung",
"suffix": ""
},
{
"first": "Molly",
"middle": [],
"last": "Ireland",
"suffix": ""
},
{
"first": "Amy",
"middle": [],
"last": "Gonzales",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Booth",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Pennebaker, Cindy Chung, Molly Ireland, Amy Gonzales, and Roger Booth. 2007. The development and psychometric properties of LIWC2007.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Linguistic inquiry and word count (LIWC)",
"authors": [
{
"first": "James",
"middle": [],
"last": "Pennebaker",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Francis",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Booth",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Pennebaker, M. Francis, and R. Booth. 2001. Linguistic inquiry and word count (LIWC): LIWC2001. 71.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Fusing audio, visual and textual clues for sentiment analysis from multimodal content",
"authors": [
{
"first": "Soujanya",
"middle": [],
"last": "Poria",
"suffix": ""
}
],
"year": 2016,
"venue": "Neurocomputing",
"volume": "174",
"issue": "",
"pages": "50--59",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Soujanya Poria. 2016. Fusing audio, visual and textual clues for sentiment analysis from multimodal content. Neurocomputing, 174:50-59.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Deep convolutional neural network textual features and multiple kernel learning for utterance-level multimodal sentiment analysis",
"authors": [],
"year": null,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2539--2544",
"other_ids": {
"DOI": [
"10.18653/v1/D15-1303"
]
},
"num": null,
"urls": [],
"raw_text": "Deep convolutional neural network textual fea- tures and multiple kernel learning for utterance-level multimodal sentiment analysis. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2539-2544, Lisbon, Por- tugal. Association for Computational Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Towards an intelligent framework for multimodal affective data analysis. Neural Networks",
"authors": [
{
"first": "Soujanya",
"middle": [],
"last": "Poria",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Cambria",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Hussain",
"suffix": ""
},
{
"first": "Guang-Bin",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Soujanya Poria, Erik Cambria, Amir Hussain, and Guang-Bin Huang. 2014a. Towards an intelligent framework for multimodal affective data analysis. Neu- ral Networks, 63.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "EmoSenticSpace: A novel framework for affective common-sense reasoning. Knowledge-Based Systems",
"authors": [
{
"first": "Soujanya",
"middle": [],
"last": "Poria",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Gelbukh",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Cambria",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Hussain",
"suffix": ""
},
{
"first": "Guang-Bin",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Soujanya Poria, Alexander Gelbukh, Erik Cam- bria, Amir Hussain, and Guang-Bin Huang. 2014b. EmoSenticSpace: A novel framework for affective common-sense reasoning. Knowledge-Based Systems, 69.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Emotion recognition in conversation: Research challenges, datasets, and recent advances",
"authors": [
{
"first": "Soujanya",
"middle": [],
"last": "Poria",
"suffix": ""
},
{
"first": "Navonil",
"middle": [],
"last": "Majumder",
"suffix": ""
},
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1905.02947"
]
},
"num": null,
"urls": [],
"raw_text": "Soujanya Poria, Navonil Majumder, Rada Mihalcea, and Eduard Hovy. 2019. Emotion recognition in con- versation: Research challenges, datasets, and recent ad- vances. arXiv:1905.02947. Version 1.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Deep convolutional neural networks for sentiment analysis of short texts",
"authors": [
{
"first": "Santos",
"middle": [],
"last": "C\u00edcero Dos",
"suffix": ""
},
{
"first": "Ma\u00edra",
"middle": [],
"last": "Gatti",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "69--78",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C\u00edcero dos Santos and Ma\u00edra Gatti. 2014. Deep convo- lutional neural networks for sentiment analysis of short texts. In Proceedings of COLING 2014, the 25th In- ternational Conference on Computational Linguistics: Technical Papers, pages 69-78, Dublin, Ireland. Dublin City University and Association for Computational Lin- guistics.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "AVEC 2012 -the continuous audio/visual emotion challenge",
"authors": [
{
"first": "Bj\u00f6rn",
"middle": [],
"last": "Schuller",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Valstar",
"suffix": ""
},
{
"first": "Roddy",
"middle": [],
"last": "Cowie",
"suffix": ""
},
{
"first": "Maja",
"middle": [],
"last": "Pantic",
"suffix": ""
}
],
"year": 2012,
"venue": "ICMI'12 -Proceedings of the ACM International Conference on Multimodal Interaction",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bj\u00f6rn Schuller, Michel Valstar, Roddy Cowie, and Maja Pantic. 2012. AVEC 2012 -the continuous au- dio/visual emotion challenge. ICMI'12 -Proceedings of the ACM International Conference on Multimodal Interaction.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Acoustic features for environmental sound analysis",
"authors": [
{
"first": "Romain",
"middle": [],
"last": "Serizel",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Bisot",
"suffix": ""
},
{
"first": "Slim",
"middle": [],
"last": "Essid",
"suffix": ""
},
{
"first": "Ga\u00ebl",
"middle": [],
"last": "Richard",
"suffix": ""
}
],
"year": 2018,
"venue": "Computational Analysis of Sound Scenes and Events",
"volume": "",
"issue": "",
"pages": "71--101",
"other_ids": {
"DOI": [
"10.1007/978-3-319-63450-0_4"
]
},
"num": null,
"urls": [],
"raw_text": "Romain Serizel, Victor Bisot, Slim Essid, and Ga\u00ebl Richard. 2018. Acoustic features for environmental sound analysis. Computational Analysis of Sound Scenes and Events, pages 71-101.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Beyond facial expressions: Learning human emotion from body gestures",
"authors": [
{
"first": "Caifeng",
"middle": [],
"last": "Shan",
"suffix": ""
},
{
"first": "Shaogang",
"middle": [],
"last": "Gong",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"W"
],
"last": "Mcowan",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. British Machine Vision Conf",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Caifeng Shan, Shaogang Gong, and Peter W. Mcowan. 2007. Beyond facial expressions: Learning human emotion from body gestures. In in Proc. British Ma- chine Vision Conf.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Recursive deep models for semantic compositionality over a sentiment treebank",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Perelygin",
"suffix": ""
},
{
"first": "Jean",
"middle": [
"Y"
],
"last": "Wu",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Chuang",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
}
],
"year": 2013,
"venue": "EMNLP",
"volume": "1631",
"issue": "",
"pages": "1631--1642",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Alex Perelygin, Jean Y. Wu, Jason Chuang, Christopher D. Manning, Andrew .Y. Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. EMNLP, 1631:1631-1642.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "The General Inquirer: A Computer Approach to Content Analysis",
"authors": [
{
"first": "Philip",
"middle": [],
"last": "Stone",
"suffix": ""
},
{
"first": "Dexter",
"middle": [],
"last": "Dunphy",
"suffix": ""
},
{
"first": "Marshall",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Ogilvie",
"suffix": ""
}
],
"year": 1966,
"venue": "",
"volume": "4",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philip Stone, Dexter Dunphy, Marshall Smith, and Daniel Ogilvie. 1966. The General Inquirer: A Com- puter Approach to Content Analysis, volume 4.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "SemEval-2007 task 14: Affective text. Proceedings of the 4th International Workshop on the Semantic Evaluations",
"authors": [
{
"first": "Carlo",
"middle": [],
"last": "Strapparava",
"suffix": ""
},
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carlo Strapparava and Rada Mihalcea. 2007. SemEval- 2007 task 14: Affective text. Proceedings of the 4th International Workshop on the Semantic Evaluations (SemEval 2007).",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Seewave: a free modular tool for sound analysis and synthesis",
"authors": [
{
"first": "Jerome",
"middle": [],
"last": "Sueur",
"suffix": ""
},
{
"first": "Thierry",
"middle": [],
"last": "Aubin",
"suffix": ""
},
{
"first": "Caroline",
"middle": [],
"last": "Simonis",
"suffix": ""
}
],
"year": 2008,
"venue": "Bioacoustics",
"volume": "18",
"issue": "",
"pages": "213--226",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jerome Sueur, Thierry Aubin, and Caroline Simonis. 2008. Seewave: a free modular tool for sound analy- sis and synthesis. Bioacoustics, 18:213-226.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Attention-based LSTM for aspectlevel sentiment classification",
"authors": [
{
"first": "Yequan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Minlie",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Xiaoyan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "606--615",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1058"
]
},
"num": null,
"urls": [],
"raw_text": "Yequan Wang, Minlie Huang, Xiaoyan Zhu, and Li Zhao. 2016. Attention-based LSTM for aspect- level sentiment classification. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 606-615, Austin, Texas. Association for Computational Linguistics.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Annotating expressions of opinions and emotions in language. Language Resources and Evaluation (formerly",
"authors": [
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
},
{
"first": "Theresa",
"middle": [],
"last": "Wilson",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2005,
"venue": "Computers and the Humanities)",
"volume": "39",
"issue": "",
"pages": "164--210",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Janyce Wiebe, Theresa Wilson, and Claire Cardie. 2005. Annotating expressions of opinions and emo- tions in language. Language Resources and Evalua- tion (formerly Computers and the Humanities), 39:164- 210.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Recognizing contextual polarity in phrase-level sentiment analysis",
"authors": [
{
"first": "Theresa",
"middle": [],
"last": "Wilson",
"suffix": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Hoffmann",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of HLT/EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2005. Recognizing contextual polarity in phrase-level sentiment analysis. Proceedings of HLT/EMNLP.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Audio-visual affect recognition",
"authors": [
{
"first": "Z",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Tu",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "T",
"middle": [
"S"
],
"last": "Huang",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Pianfetti",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Levinson",
"suffix": ""
}
],
"year": 2007,
"venue": "IEEE Transactions on Multimedia",
"volume": "9",
"issue": "2",
"pages": "424--428",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Z. Zeng, J. Tu, M. Liu, T. S. Huang, B. Pianfetti, D. Roth, and S. Levinson. 2007. Audio-visual af- fect recognition. IEEE Transactions on Multimedia, 9(2):424-428.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"text": "Semi-supervised learning annotation pipeline the human annotation effort to a minimum, a semisupervised learning annotation scheme is applied to tag the polarity of utterances as negative, or nonnegative.Figure 2illustrates the process which is similar to active learning annotation. It takes as input a set of labeled examples D L including text D LT and audio D LA , as well as a larger set of unlabeled examples D U = {D U T , D U A }, and produces committee classifiers C = {C T , C A } and a relatively small set of newly labeled data D U (I) and D U (M )(Olsson, 2008). Semi-supervised learning annotation cooperates with humans and machines and combines both semi-supervised learning and multiple classifiers approach for corpus annotation. This pipeline consists of several steps: data generation to obtain D U (Section 3.2), model training for both modalities to obtain C T and C A using D LT and D LA (Section 4), model deployment to get machine label D U = {D U T , D U A }, model fusion (Section 5) and results evaluation to decide whether to accept machine label D U (M ) or ask a human annotator for classifications of the utterances to obtain DU (I), then move D U (I) and D U (M ) from D U to D L .It is cyclical and iterative as every step is repeated to continuously improve the accuracy of the classifier and achieve a successful algorithm. Note, the classifiers in committee C = {C T , C A } are modified based on D L in each iteration. The annotation process starts with 20 calls selected from the service center by human domain experts, 20 calls are chunked to 1410 segments via data preparation processing and annotated by three annotators manually as D L . For the first three iterations, set C T ={Support Vector Machine (SVM), VADER, AWSSA * , AWSCC \u2020 , GoogleSA \u2021 } requires a small size of training data or no extra training data. As the size of D LT increases, we form a new com- * AWS Comprehend Sentiment Analysis API \u2020 AWS Custom Classification API \u2021 Google Language Sentiment Analysis API mittee C T = {SVM, Long Short-Term Memory (LSTM), Bidirectional Long Short-Term Memory (BLSTM)}. These classifiers are described in Section 4.1. Section 4.2 introduced C A = {Elastic-Net Regularized Generalized Linear Models (Elastic-Net), K-Nearest Neighbors (KNN), Random Forest (RF), Gaussian Mixture Model (unsupervised GMM) }. In the later iterations, Recurrent Neural Networks (RNN) such as LSTM and BLSTM are applied.",
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"text": "+) 0.953 0.979 0.894 0.950 0.933 Rec. (-) 0.761 0.240 0.804 0.777 0.817",
"uris": null
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"text": "The (cumulative) negative score pattern between customers (C) and CSRs (R)",
"uris": null
},
"TABREF2": {
"html": null,
"text": "",
"content": "<table><tr><td colspan=\"2\">: Binary classification of sentiment polarity on test data: Accuracy (Acc.), weighted F1-score (F1 (w)),</td></tr><tr><td colspan=\"2\">precision (Prec.) and recall (Rec.) for the nonnegative (+) and negative (-) classes</td></tr><tr><td>Methods</td><td>Ensemble Text Audio T+A Fus1 Fus2 Fusion</td></tr><tr><td>Acc.</td><td>0.851 0.586 0.846 0.858 0.871</td></tr><tr><td>F1 (w)</td><td>0.851 0.525 0.846 0.858 0.</td></tr></table>",
"type_str": "table",
"num": null
}
}
}
}