{ "paper_id": "C16-1035", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:00:15.801739Z" }, "title": "Learning to Distill: The Essence Vector Modeling Framework", "authors": [ { "first": "Kuan-Yu", "middle": [], "last": "Chen", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Taiwan Normal University", "location": { "country": "Taiwan" } }, "email": "kychen@iis.sinica.edu.tw" }, { "first": "Shih-Hung", "middle": [], "last": "Liu", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Taiwan Normal University", "location": { "country": "Taiwan" } }, "email": "" }, { "first": "Berlin", "middle": [], "last": "Chen", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Taiwan Normal University", "location": { "country": "Taiwan" } }, "email": "berlin@csie.ntnu.edu.tw" }, { "first": "Hsin-Min", "middle": [], "last": "Wang", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Taiwan Normal University", "location": { "country": "Taiwan" } }, "email": "" }, { "first": "Academia", "middle": [], "last": "Sinica", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Taiwan Normal University", "location": { "country": "Taiwan" } }, "email": "" }, { "first": "", "middle": [], "last": "Taiwan", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Taiwan Normal University", "location": { "country": "Taiwan" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In the context of natural language processing, representation learning has emerged as a newly active research subject because of its excellent performance in many applications. Learning representations of words is a pioneering study in this school of research. However, paragraph (or sentence and document) embedding learning is more suitable/reasonable for some tasks, such as sentiment classification and document summarization. Nevertheless, as far as we are aware, there is relatively less work focusing on the development of unsupervised paragraph embedding methods. Classic paragraph embedding methods infer the representation of a given paragraph by considering all of the words occurring in the paragraph. Consequently, those stop or function words that occur frequently may mislead the embedding learning process to produce a misty paragraph representation. Motivated by these observations, our major contributions in this paper are twofold. First, we propose a novel unsupervised paragraph embedding method, named the essence vector (EV) model, which aims at not only distilling the most representative information from a paragraph but also excluding the general background information to produce a more informative low-dimensional vector representation for the paragraph. We evaluate the proposed EV model on benchmark sentiment classification and multi-document summarization tasks. The experimental results demonstrate the effectiveness and applicability of the proposed embedding method. Second, in view of the increasing importance of spoken content processing, an extension of the EV model, named the denoising essence vector (D-EV) model, is proposed. The D-EV model not only inherits the advantages of the EV model but also can infer a more robust representation for a given spoken paragraph against imperfect speech recognition. 
The utility of the D-EV model is evaluated on a spoken document summarization task, confirming the practical merits of the proposed embedding method in relation to several well-practiced and state-of-the-art summarization methods.", "pdf_parse": { "paper_id": "C16-1035", "_pdf_hash": "", "abstract": [ { "text": "In the context of natural language processing, representation learning has emerged as a newly active research subject because of its excellent performance in many applications. Learning representations of words is a pioneering study in this school of research. However, paragraph (or sentence and document) embedding learning is more suitable/reasonable for some tasks, such as sentiment classification and document summarization. Nevertheless, as far as we are aware, there is relatively less work focusing on the development of unsupervised paragraph embedding methods. Classic paragraph embedding methods infer the representation of a given paragraph by considering all of the words occurring in the paragraph. Consequently, those stop or function words that occur frequently may mislead the embedding learning process to produce a misty paragraph representation. Motivated by these observations, our major contributions in this paper are twofold. First, we propose a novel unsupervised paragraph embedding method, named the essence vector (EV) model, which aims at not only distilling the most representative information from a paragraph but also excluding the general background information to produce a more informative low-dimensional vector representation for the paragraph. We evaluate the proposed EV model on benchmark sentiment classification and multi-document summarization tasks. The experimental results demonstrate the effectiveness and applicability of the proposed embedding method. Second, in view of the increasing importance of spoken content processing, an extension of the EV model, named the denoising essence vector (D-EV) model, is proposed. The D-EV model not only inherits the advantages of the EV model but also can infer a more robust representation for a given spoken paragraph against imperfect speech recognition. The utility of the D-EV model is evaluated on a spoken document summarization task, confirming the practical merits of the proposed embedding method in relation to several well-practiced and state-of-the-art summarization methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Representation learning has gained significant interest of research and experimentation in many machine learning applications because of its remarkable performance. When it comes to the field of natural language processing (NLP), word embedding methods can be viewed as pioneering studies (Bengio et al., 2003; Mikolov et al., 2013; Pennington et al., 2014) . The central idea of these methods is to learn continuously distributed vector representations of words using neural networks, which seeks to probe latent semantic and/or syntactic cues that can in turn be used to induce similarity measures among words. A common thread of leveraging word embedding methods to NLP-related tasks is to represent a given paragraph (or sentence and document) by simply taking an average over the word embeddings corresponding to the words occurring in the paragraph. 
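To make the averaging strategy concrete, the following is a minimal sketch of composing a paragraph representation by averaging word embeddings; the toy vocabulary, the 4-dimensional embeddings, and the helper name average_embedding are illustrative assumptions, not details taken from the cited works.

```python
import numpy as np

# Toy stand-ins for pre-trained word embeddings (word2vec/GloVe in practice);
# random 4-dimensional vectors are used here purely for illustration.
rng = np.random.default_rng(0)
vocab = ['the', 'movie', 'was', 'surprisingly', 'good']
word_vec = {w: rng.normal(size=4) for w in vocab}

def average_embedding(tokens, word_vec):
    # Paragraph representation = mean of the embeddings of its in-vocabulary words.
    vectors = [word_vec[w] for w in tokens if w in word_vec]
    if not vectors:
        return np.zeros(4)
    return np.mean(vectors, axis=0)

paragraph = 'the movie was surprisingly good'.split()
print(average_embedding(paragraph, word_vec))  # a single 4-dimensional paragraph vector
```

Note that every occurrence of every word contributes equally to the average, which is precisely why frequent stop or function words can dominate the resulting representation.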
By doing so, this thread of methods has recently enjoyed substantial success in many NLP-related tasks (Collobert and Weston, 2008; Tang et al., 2014; Kageback et al., 2014) .", "cite_spans": [ { "start": 289, "end": 310, "text": "(Bengio et al., 2003;", "ref_id": "BIBREF0" }, { "start": 311, "end": 332, "text": "Mikolov et al., 2013;", "ref_id": "BIBREF22" }, { "start": 333, "end": 357, "text": "Pennington et al., 2014)", "ref_id": "BIBREF28" }, { "start": 959, "end": 987, "text": "(Collobert and Weston, 2008;", "ref_id": "BIBREF8" }, { "start": 988, "end": 1006, "text": "Tang et al., 2014;", "ref_id": "BIBREF31" }, { "start": 1007, "end": 1029, "text": "Kageback et al., 2014)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Although the empirical effectiveness of word embedding methods has been proven recently, the composite representation for a paragraph (or sentence and document) is a bit queer. Theoretically, paragraph-based representation learning is expected to be more suitable for such tasks as information retrieval, sentiment analysis and document summarization (Huang et al., 2013; Le and Mikolov, 2014; Palangi et al., 2015) , to name but a few. However, to the best of our knowledge, unsupervised paragraph embedding has been largely under-explored on these tasks. Classic paragraph embedding methods infer the representation of a given paragraph by considering all of the words occurring in the paragraph. Consequently, those stop or function words that occur frequently in the paragraph may mislead the embedding learning process to produce a misty paragraph representation. In other words, the frequent words or modifiers may overshadow the indicative words, thereby drifting the main theme of the semantic content in the paragraph. As a result, the learned representation for the paragraph might be undesired. In order to address this shortcoming, we propose a novel unsupervised paragraph embedding method, named the essence vector (EV) model, which aims at not only distilling the most representative information from a paragraph but also excluding the general background information to produce a more informative and discriminative low-dimensional vector representation for the paragraph.", "cite_spans": [ { "start": 351, "end": 371, "text": "(Huang et al., 2013;", "ref_id": "BIBREF13" }, { "start": 372, "end": 393, "text": "Le and Mikolov, 2014;", "ref_id": "BIBREF17" }, { "start": 394, "end": 415, "text": "Palangi et al., 2015)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "On a separate front, with the popularity of the Internet and the increasing development of the digital storage capacity, unprecedented volumes of multimedia information, such as broadcast news, lecture recordings, voice mails and video streams, among others, have been quickly disseminated around the world and shared among people. Consequently, spoken content processing has become an important and urgent demand (Lee and Chen, 2005; Ostendorf, 2008; Liu and Hakkani-Tur, 2011) . Obviously, speech is one of the most important sources of information about multimedia (Furui et al., 2012) . A common school of processing multimedia is to transcribe the associated spoken content into text or lattice format by an automatic speech recognizer. After that, well-developed text processing frameworks can then be readily applied. However, such imperfect transcripts usually limit the associated efficacy. 
To bridge the performance gap between perfect and imperfect transcripts, we hence extend the proposed essence vector model to a denoising essence vector (D-EV) model, which not only inherits the advantages of the EV model but also can infer a more robust representation for a given spoken paragraph that is more resilient to imperfect speech recognition.", "cite_spans": [ { "start": 414, "end": 434, "text": "(Lee and Chen, 2005;", "ref_id": "BIBREF25" }, { "start": 435, "end": 451, "text": "Ostendorf, 2008;", "ref_id": "BIBREF24" }, { "start": 452, "end": 478, "text": "Liu and Hakkani-Tur, 2011)", "ref_id": "BIBREF20" }, { "start": 568, "end": 588, "text": "(Furui et al., 2012)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The remainder of this paper is organized as follows. We first briefly review two classic paragraph embedding methods in Section 2. Section 3 sheds light on our proposed essence vector model and its extension, the denoising essence vector model. Then, a series of experiments are presented in Section 4 to evaluate the proposed representation learning methods. Finally, Section 5 concludes the paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In contrast to the large body of work on developing various word embedding methods, there are relatively few studies concentrating on learning paragraph representations in an unsupervised manner (Huang et al., 2013; Le and Mikolov, 2014; Chen et al., 2014; Palangi et al., 2015) . Representative methods include the distributed memory model (Le and Mikolov, 2014) and the distributed bag-ofwords model (Le and Mikolov, 2014; Chen et al., 2014) .", "cite_spans": [ { "start": 195, "end": 215, "text": "(Huang et al., 2013;", "ref_id": "BIBREF13" }, { "start": 216, "end": 237, "text": "Le and Mikolov, 2014;", "ref_id": "BIBREF17" }, { "start": 238, "end": 256, "text": "Chen et al., 2014;", "ref_id": "BIBREF6" }, { "start": 257, "end": 278, "text": "Palangi et al., 2015)", "ref_id": "BIBREF26" }, { "start": 341, "end": 363, "text": "(Le and Mikolov, 2014)", "ref_id": "BIBREF17" }, { "start": 402, "end": 424, "text": "(Le and Mikolov, 2014;", "ref_id": "BIBREF17" }, { "start": 425, "end": 443, "text": "Chen et al., 2014)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Literature Review", "sec_num": "2" }, { "text": "The distributed memory (DM) model is inspired and hybridized from the traditional feed-forward neural network language model (NNLM) (Bengio et al., 2003) and the recently proposed word embedding methods (Mikolov et al., 2013) . Formally, given a sequence of words, { 1 , 2 , \u22ef , }, the objective function of feed-forward NNLM is to maximize the total log-likelihood,", "cite_spans": [ { "start": 132, "end": 153, "text": "(Bengio et al., 2003)", "ref_id": "BIBREF0" }, { "start": 203, "end": 225, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "The Distributed Memory Model", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u2211 log ( | \u2212 +1 , \u22ef , \u22121 ) =1 .", "eq_num": "(1)" } ], "section": "The Distributed Memory Model", "sec_num": "2.1" }, { "text": "Obviously, NNLM is designed to predict the probability of a future word, given its \u2212 1 previous words. 
The input of NNLM is a high-dimensional vector, which is constructed by concatenating (or taking an average over) the word representations of all words within the context (i.e., \u2212 +1 , \u22ef , \u22121 ), and the output can be viewed as that of a multi-class classifier. By doing so, the -gram probability can be calculated through a softmax function at the output layer:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Distributed Memory Model", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\ufffd \ufffd \u2212 +1 , \u22ef , \u22121 \ufffd = exp( ) \u2211 exp ( ) \u2208 ,", "eq_num": "(2)" } ], "section": "The Distributed Memory Model", "sec_num": "2.1" }, { "text": "where denotes the output value for word , and is the vocabulary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Distributed Memory Model", "sec_num": "2.1" }, { "text": "Based on the NNLM, the notion underlying the DM model is that a given paragraph also contributes to the prediction of the next word, given its previous words in the paragraph (Le and Mikolov, 2014) . To make the idea work, the training objective function is defined by", "cite_spans": [ { "start": 175, "end": 197, "text": "(Le and Mikolov, 2014)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "The Distributed Memory Model", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u2211 \u2211 log ( | \u2212 +1 , \u22ef , \u22121 , ) =1 T =1 ,", "eq_num": "(3)" } ], "section": "The Distributed Memory Model", "sec_num": "2.1" }, { "text": "where T denotes the number of paragraphs in the training corpus, denotes the -th paragraph, and is the length of . Since the model acts as a memory unit that remembers what is missing from the current context, it is named the distributed memory (DM) model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Distributed Memory Model", "sec_num": "2.1" }, { "text": "Opposite to the DM model, a simplified version is to only rely on the paragraph representation to predict all of the words occurring in the paragraph (Le and Mikolov, 2014; Chen et al., 2014) . The training objective function can then be defined by maximizing the predictive probabilities all over the words occurring in the paragraph:", "cite_spans": [ { "start": 150, "end": 172, "text": "(Le and Mikolov, 2014;", "ref_id": "BIBREF17" }, { "start": 173, "end": 191, "text": "Chen et al., 2014)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "The Distributed Bag-of-Words Model", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u2211 \u2211 log ( | ) =1 T =1 .", "eq_num": "(4)" } ], "section": "The Distributed Bag-of-Words Model", "sec_num": "2.2" }, { "text": "Since the simplified model ignores the contextual words at the input layer, the model is named the distributed bag-of-words (DBOW) model. 
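As a concrete illustration of the DBOW objective in Eq. (4), the sketch below evaluates the total log-likelihood of the words in each training paragraph given only that paragraph's vector. The toy corpus, dimensions, and parameter names (P for paragraph vectors, U for softmax weights) are assumptions for illustration; an actual implementation would maximize this quantity with gradient-based training, typically using negative sampling or a hierarchical softmax rather than the full softmax shown here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy corpus: each paragraph is a list of word ids from a small vocabulary.
vocab_size, embed_dim = 10, 6
paragraphs = [[0, 2, 3, 2], [4, 5, 1], [7, 8, 9, 7, 0]]

# Parameters to be learned: one vector per paragraph plus shared softmax weights.
P = rng.normal(scale=0.1, size=(len(paragraphs), embed_dim))
U = rng.normal(scale=0.1, size=(vocab_size, embed_dim))

def log_softmax(logits):
    z = logits - logits.max()
    return z - np.log(np.exp(z).sum())

def dbow_log_likelihood(P, U, paragraphs):
    # Eq. (4): sum over paragraphs and word positions of log p(word | paragraph).
    total = 0.0
    for i, words in enumerate(paragraphs):
        log_probs = log_softmax(U @ P[i])  # one distribution over the whole vocabulary
        total += sum(log_probs[w] for w in words)
    return total

print(dbow_log_likelihood(P, U, paragraphs))
```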
In addition to being conceptually simple, the DBOW model only needs to store the softmax weights, whereas the DM model stores both softmax weights and word vectors (Le and Mikolov, 2014) .", "cite_spans": [ { "start": 302, "end": 324, "text": "(Le and Mikolov, 2014)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "The Distributed Bag-of-Words Model", "sec_num": "2.2" }, { "text": "3 Learning to Distill: The Proposed Essence Vector Modeling Framework", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Distributed Bag-of-Words Model", "sec_num": "2.2" }, { "text": "Classic paragraph embedding methods infer the representation of a paragraph by considering all of the words occurring in the paragraph. However, we all agree upon that the number of content words in a paragraph is usually less than that of stop or function words. Accordingly, those stop or function words may mislead the representation learning process to produce an ambiguous paragraph representation. In other words, the frequent words or modifiers may overshadow the indicative words, thereby making the learned representation deviate from the main theme of the semantic content expressed in the paragraph. Consequently, the associated capacity will be limited. In order to complement such deficiency, we hence strive to develop a novel unsupervised paragraph embedding method, which aims at not only distilling the most representative information from a paragraph but also diminishing the impact of the general background information (probably predominated by stop or function words), so as to deduce an informative and discriminative low-dimensional vector representation for the paragraph. We henceforth term this novel unsupervised paragraph embedding method the essence vector (EV) model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Essence Vector Model", "sec_num": "3.1" }, { "text": "To turn the idea into a reality, we begin with an assumption that each paragraph (or sentence and document) can be assembled by two components: the paragraph specific information and the general background information. This assumption also holds in the low-dimensional representation space. Accordingly, the proposed method consists of three modules: a paragraph encoder (\u2022), which can automatically infer the desired low-dimensional vector representation by considering only the paragraph-specific information; a background encoder (\u2022) , which is used to map the general background information into a low-dimensional representation; and a decoder \u210e(\u2022) that can reconstruct the original paragraph by combining the paragraph representation and the background representation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Essence Vector Model", "sec_num": "3.1" }, { "text": "More formally, given a set of training paragraphs { 1 , \u22ef , , \u22ef , T }, in order to modulate the effect of different lengths of paragraphs, each paragraph is first represented by a bag-of-words highdimensional vector \u2208 \u211d | | , where each element corresponds to the frequency count of a word/term in the vocabulary , and the vector is normalized to unit-sum. 
Then, a paragraph encoder is applied to extract the most specific information from the paragraph and encapsulate it into a low-dimensional vector representation:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Essence Vector Model", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\ufffd \ufffd = .", "eq_num": "(5)" } ], "section": "The Essence Vector Model", "sec_num": "3.1" }, { "text": "At the same time, the general background is also represented by a high-dimensional vector with normalized word/term frequency counts, \u2208 \u211d | | , and a background encoder is used to compress the general background information into a low-dimensional vector representation: ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Essence Vector Model", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "( ) = .", "eq_num": "(6" } ], "section": "The Essence Vector Model", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u210e\ufffd \u2022 + \ufffd1 \u2212 \ufffd \u2022 \ufffd = \u2032 ,", "eq_num": "(7)" } ], "section": "The Essence Vector Model", "sec_num": "3.1" }, { "text": "where \u210e(\u2022) is also a fully connected multilayer neural network with parameter \u210e , and the interpolation weight can be determined by an attention function (\u2022,\u2022):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Essence Vector Model", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "= ( , ).", "eq_num": "(8)" } ], "section": "The Essence Vector Model", "sec_num": "3.1" }, { "text": "The attention function can be realized by a trainable network or a simple linear/non-linear function. 
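To make the pipeline of Eqs. (5)-(8) concrete, the following is a minimal sketch of a single forward pass through the three modules. The single-tanh-layer encoders, the cosine-based attention weight, and all variable names are illustrative assumptions; the paper leaves the exact network configuration open.

```python
import numpy as np

rng = np.random.default_rng(2)
vocab_size, hidden_dim = 50, 8

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# One tanh layer per module, randomly initialized for illustration; the real
# modules are multilayer networks whose weights are trained jointly.
W_p = rng.normal(scale=0.1, size=(hidden_dim, vocab_size))  # paragraph encoder
W_b = rng.normal(scale=0.1, size=(hidden_dim, vocab_size))  # background encoder
W_h = rng.normal(scale=0.1, size=(vocab_size, hidden_dim))  # decoder

def paragraph_encoder(x):    # Eq. (5): distill the paragraph-specific essence vector
    return np.tanh(W_p @ x)

def background_encoder(b):   # Eq. (6): compress the general background information
    return np.tanh(W_b @ b)

def attention_weight(x, b):  # Eq. (8): here a simple cosine-based weight in [0, 1]
    cos = x @ b / (np.linalg.norm(x) * np.linalg.norm(b) + 1e-12)
    return 0.5 * (1.0 + cos)

def decoder(z):              # Eq. (7): reconstruct a unit-sum word distribution
    return softmax(W_h @ z)

# x and b are unit-sum bag-of-words vectors for a paragraph and the background.
x = rng.dirichlet(np.ones(vocab_size))
b = rng.dirichlet(np.ones(vocab_size))

essence, background = paragraph_encoder(x), background_encoder(b)
lam = attention_weight(x, b)
x_rec = decoder(lam * essence + (1.0 - lam) * background)  # Eq. (7)
```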
Further, to ensure the quality of the learned background representation , it should also be mapped back to by \u210e(\u2022) appropriately:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Essence Vector Model", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u210e( ) = \u2032 .", "eq_num": "(9)" } ], "section": "The Essence Vector Model", "sec_num": "3.1" }, { "text": "In a nutshell, the training objective function of the proposed essence vector model is to minimize the total KL-divergence measure:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Essence Vector Model", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "min , , \u210e \u2211 \ufffd log \u2032 + log \u2032 \ufffd T =1 .", "eq_num": "(10)" } ], "section": "The Essence Vector Model", "sec_num": "3.1" }, { "text": "The activation function used in the EV model is the hyperbolic tangent, except that the output layer in the decoder \u210e(\u2022) is the softmax (Goodfellow et al., 2016) , the cosine distance is used to calculate the ", "cite_spans": [ { "start": 136, "end": 161, "text": "(Goodfellow et al., 2016)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "The Essence Vector Model", "sec_num": "3.1" }, { "text": "Next, we turn to focus on learning representations for spoken paragraphs. In addition to the stop/function words and modifiers, the additional challenge facing spoken paragraph learning is the imperfect transcripts generated by automatic speech recognition. Therefore, our goal is not only to inherit the advantages of the EV model, but also to infer a more robust representation for a given spoken paragraph that withstands the errors of imperfect transcripts. The core idea is that the learned representation of a spoken paragraph should be able to interpret its corresponding manual transcript paragraph as much as possible. With the intention of equipping the ability that can distill the true information from a given spoken paragraph, we further incorporate a multi-task learning strategy in the EV modeling framework.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Denoising Essence Vector Model", "sec_num": "3.2" }, { "text": "To put the idea into a reality, an additional module, a denoising decoder (\u2022), is introduced on top of the EV model. More formally, given a set of training spoken paragraphs { 1 , \u22ef , , \u22ef , T } and their manual transcripts { 1 , \u22ef , , \u22ef , T }, the EV model can first be constructed by referring to each pair of and the general background information (cf. Section 3.1). Since we target at making the learned paragraph representation contain the true information of , we assume that the weighted combination of and can also be well mapped back to by the decoder (\u2022):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Denoising Essence Vector Model", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\ufffd \u2022 + \ufffd1 \u2212 \ufffd \u2022 \ufffd = \u2032 ,", "eq_num": "(11)" } ], "section": "The Denoising Essence Vector Model", "sec_num": "3.2" }, { "text": "where (\u2022) is a fully connected neural network with parameter . 
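Continuing the hypothetical names from the previous sketch, the fragment below computes the KL-divergence reconstruction terms that drive training: the paragraph and background terms of Eqs. (7) and (9), plus the denoising term of Eq. (11) that pulls the representation of a spoken (ASR) paragraph toward its manual transcript. For the plain EV model only the first two terms apply; the denoising decoder, its parameters, and the variable names are again illustrative assumptions.

```python
# Continues the sketch above (softmax, encoders, decoder, attention_weight, x, b, rng).
def kl(p, q, eps=1e-12):
    # KL(p || q) between two unit-sum vectors.
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Hypothetical denoising decoder: same shape as the decoder, separate weights.
W_d = rng.normal(scale=0.1, size=(vocab_size, hidden_dim))
def denoising_decoder(z):
    return softmax(W_d @ z)

def reconstruction_losses(x_asr, x_ref, b):
    essence, background = paragraph_encoder(x_asr), background_encoder(b)
    lam = attention_weight(x_asr, b)
    mix = lam * essence + (1.0 - lam) * background
    loss_paragraph = kl(x_asr, decoder(mix))          # Eq. (7): reconstruct the paragraph
    loss_background = kl(b, decoder(background))      # Eq. (9): reconstruct the background
    loss_denoise = kl(x_ref, denoising_decoder(mix))  # Eq. (11): recover the manual transcript
    return loss_paragraph + loss_background + loss_denoise

x_ref = rng.dirichlet(np.ones(vocab_size))  # stand-in for the manual-transcript vector
print(reconstruction_losses(x, x_ref, b))
```

Summing these terms over all training paragraphs and minimizing them with a stochastic optimizer yields the EV objective in Eq. (10) and, with the denoising term included, the D-EV objective introduced next.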
The activation function used in (\u2022) is the hyperbolic tangent, except that the last layer is the softmax. We will henceforth term this extended unsupervised paragraph embedding method the denoising essence vector (D-EV) model. The training objective of the D-EV model is to minimize the following total KL-divergence measure: ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Denoising Essence Vector Model", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "min , , \u210e , \u2211 \ufffd log \u2032 + log \u2032 + log \u2032 \ufffd T =1 .", "eq_num": "(12)" } ], "section": "The Denoising Essence Vector Model", "sec_num": "3.2" }, { "text": "At the outset, we evaluate the proposed EV model on the sentiment polarity classification task. Four widely-used benchmark multi-domain sentiment datasets are used in this study 1 (Blitzer et al., 2007) . They are product reviews taken from Amazon.com in four different domains: Books, DVD, Electronics, and Kitchen. Each of the reviews, ranging from Star-1 to Star-5, were rated by a customer. The reviews with Star-1 and Star-2 were labelled as Negative, and those with Star-4 and Star-5 were labeled as Positive. Each of the four datasets contains 1,000 positive reviews, 1,000 negative reviews, and a number of unlabeled reviews. Labeled reviews in each domain are randomly split up into ten folds (with nine folds serving as the training set and the remaining one as the test set). All of the following results are reported in terms of an average accuracy of ten-fold cross validation. The linear kernel SVM (Chang and Lin, 2011) is used as our classifier and all of the parameters are set to the default values. All of the unlabeled reviews are used to obtain the general background information and train the EV model.", "cite_spans": [ { "start": 180, "end": 202, "text": "(Blitzer et al., 2007)", "ref_id": "BIBREF2" }, { "start": 913, "end": 934, "text": "(Chang and Lin, 2011)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments on the EV Model for Sentiment Analysis", "sec_num": "4.1" }, { "text": "In this set of experiments, we first compare the EV model with PCA (Bengio et al., 2013) , which is a standard dimension reduction method. It is worthy to note that PCA is a variation of an auto-encoder (Bengio et al., 2013) method; thus it can be treated as our baseline system. All of the experimental results are listed in Table 1 . As expected, the proposed EV model consistently outperforms PCA in every domain by a significant margin. The reason might be that PCA maps data to a low-dimensional space by maximizing the statistical variance of data, but the implicitly denoising strategy and the linear formulation limit its model capability. 
On the contrary, the proposed EV model is designed to distill the most useful information from a given paragraph and exclude the general background information explicitly; it thus can deduce a more informative and discriminative representation.", "cite_spans": [ { "start": 67, "end": 88, "text": "(Bengio et al., 2013)", "ref_id": "BIBREF1" }, { "start": 203, "end": 224, "text": "(Bengio et al., 2013)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 326, "end": 333, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Experiments on the EV Model for Sentiment Analysis", "sec_num": "4.1" }, { "text": "Next, we make a step forward to compare the EV model with other baseline systems based on literal bag-of-words features, including unigrams and bigrams. The results are also shown in Table 1 . Several observations can be drawn from the results. First, although bigram features (denotes as Bigrams in Table 1 ) are believed to be more discriminative than unigram features (denotes as Unigrams in Table 1 ), the results indicate that Unigrams outperform Bigrams in most cases. The reason might be probably due to the curse of dimensionality problem. Second, as expected, the combination of unigram and bigram features (denotes as Unigrams+Bigrams) achieves better results than using Unigrams and Bigrams in isolation for all cases. Third, both the proposed EV model and PCA can make further performance gains when paired with Unigrams, Bigrams, and their combination. Fourth, the proposed EV model demonstrates its ability in the sentiment classification task since it consistently outperforms PCA for all cases in the experiments. ", "cite_spans": [], "ref_spans": [ { "start": 183, "end": 190, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 300, "end": 308, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 396, "end": 403, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Experiments on the EV Model for Sentiment Analysis", "sec_num": "4.1" }, { "text": "We further investigate the capability of the EV model on an extractive multi-document summarization task. In this study, we carry out the experiments with the DUC 2001, 2002, and 2004 datasets 2 . All the documents were compiled from newswires, and were grouped into various thematic clusters. The summary length was limited to 100 words for both DUC 2001 and DUC 2002, and 665 bytes for DUC 2004. The general background information was inferred from the LDC Gigaword corpus 3 (including Associated Press Worldstream (AP), New York Times Newswire Service (NYT), and Xinhua News Agency (XIN)). The most common belief in the document summarization community is that relevance and redundancy are two key factors for generating a concise summary. In this paper, we leverage a density peaks clustering summarization method (Rodriguez and Laio, 2014; Zhang et al., 2015) , which can take both relevance and redundancy information into account at the same time. That is, a concise summary for a given document set can be automatically generated through a one-pass process instead of an iterative process. Recently, the summarization method has proven its empirical effectiveness (Zhang et al., 2015) . 
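Because the density peaks clustering summarizer is used throughout the experiments, the following is a minimal sketch of the underlying scoring idea of Rodriguez and Laio (2014) applied to sentence selection: each sentence receives a local density score (how many similar sentences surround it, reflecting relevance) and a separation score (its distance to the nearest denser sentence, penalizing redundancy), and the product of the two ranks sentences in a single pass. The fixed cutoff, the greedy length-budgeted selection, and the function names are simplifying assumptions rather than the exact formulation of Zhang et al. (2015).

```python
import numpy as np

def density_peaks_scores(S, cutoff=0.5):
    # S: (n x n) pairwise sentence-similarity matrix with values in [0, 1].
    n = S.shape[0]
    D = 1.0 - S                             # turn similarity into a distance
    density = (D < cutoff).sum(axis=1) - 1  # neighbours within the cutoff (minus self)
    separation = np.empty(n)
    for i in range(n):
        denser = np.where(density > density[i])[0]
        # The densest sentence has no denser neighbour; give it the largest distance.
        separation[i] = D[i, denser].min() if denser.size else D[i].max()
    return density * separation             # high score = relevant and non-redundant

def summarize(sentences, S, budget_words=100):
    order = np.argsort(-density_peaks_scores(S))
    summary, used = [], 0
    for i in order:
        n_words = len(sentences[i].split())
        if used + n_words <= budget_words:  # e.g., the 100-word DUC 2001/2002 budget
            summary.append(sentences[i])
            used += n_words
    return summary
```

In this setting, S would naturally be the matrix of cosine similarities between the learned EV representations of the candidate sentences.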
For evaluation, we adopt the widely-used automatic evaluation metric ROUGE (Lin, 2003) , and take ROUGE-1 and ROUGE-2 (in F-scores) as the main measures following Cao et al., (2015) .", "cite_spans": [ { "start": 818, "end": 844, "text": "(Rodriguez and Laio, 2014;", "ref_id": "BIBREF30" }, { "start": 845, "end": 864, "text": "Zhang et al., 2015)", "ref_id": "BIBREF35" }, { "start": 1172, "end": 1192, "text": "(Zhang et al., 2015)", "ref_id": "BIBREF35" }, { "start": 1270, "end": 1281, "text": "(Lin, 2003)", "ref_id": "BIBREF18" }, { "start": 1358, "end": 1376, "text": "Cao et al., (2015)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments on the EV Model for Multi-Document Summarization", "sec_num": "4.2" }, { "text": "We compare the proposed EV model with two baseline systems (the vector space model (VSM) (Gong and Liu, 2001 ) and the LexRank (Erkan and Radev, 2004) method), the best peer systems (including Peer T, Peer 26, and Peer 65) participating DUC evaluations, and the recently elaborated DNN-based systems (including CNN and PriorSum) (Cao et al., 2015) . Owing to the space limitation, we omit the detailed introduction to these summarization methods; interested readers may refer to Penn and Zhu (2008) , Liu and Hakkani-Tur (2011) , Nenkova and McKeown (2011) , and Cao et al., (2015) for more in-depth elaboration. It is worthy to note that the proposed EV model, the two baseline systems, and the best peer systems are unsupervised methods, while the DNN-based systems are supervised ones. The experimental results are listed in Table 2 . Several interesting observations can be concluded from the results. First, the proposed EV model outperforms VSM by a large margin in all cases, and performs comparably to other well-designed unsupervised summarization methods. Second, both LexRank and EV (with the density peaks clustering method) take pairwise information into account globally, so their results are almost the same. Third, although the proposed EV model is an unsupervised method and is not specifically designed toward summarization, it almost achieves the same performance level as the complicated DNN-based supervised methods (i.e., CNN and PriorSum), which confirms the power of the EV model again.", "cite_spans": [ { "start": 89, "end": 108, "text": "(Gong and Liu, 2001", "ref_id": "BIBREF11" }, { "start": 127, "end": 150, "text": "(Erkan and Radev, 2004)", "ref_id": "BIBREF9" }, { "start": 329, "end": 347, "text": "(Cao et al., 2015)", "ref_id": "BIBREF3" }, { "start": 479, "end": 498, "text": "Penn and Zhu (2008)", "ref_id": "BIBREF27" }, { "start": 501, "end": 527, "text": "Liu and Hakkani-Tur (2011)", "ref_id": "BIBREF20" }, { "start": 530, "end": 556, "text": "Nenkova and McKeown (2011)", "ref_id": "BIBREF23" }, { "start": 563, "end": 581, "text": "Cao et al., (2015)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 828, "end": 835, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Experiments on the EV Model for Multi-Document Summarization", "sec_num": "4.2" }, { "text": "In order to assess the utility of the proposed D-EV model, we perform a series of experiments on the extractive spoken document summarization task. All of experiments are conducted on a Mandarin benchmark broadcast new corpus 4 (Wang et al., 2005) . 
The MATBN dataset is publicly available and has been widely used to evaluate several NLP-related tasks, including speech recognition (Chien, 2015) , information retrieval (Huang and Wu, 2007) and summarization . As such, we follow the experimental setting used in previous studies for speech summarization in the literature. The vocabulary size is about 72 thousand words. The average word error rate of the automatic transcripts of these broadcast news documents is about 38%. The reference summaries were generated by ranking the sentences in the manual transcript of a broadcast news document by importance without assigning a score to each sentence. Each document has three reference summaries annotated by three subjects. For the assessment of summarization performance, we adopt the commonly-used ROUGE metric (Lin, 2003) , and take ROUGE-1, ROUGE-2 and ROUGE-L (in F-scores) as the main measures. The summarization ratio is set to 10%. An external set of about 100,000 text news documents, which was assembled by the Central News Agency (CNA) during the same period as the broadcast news documents to be summarized (extracted from the Chinese Gigaword Corpus 5 released by LDC), is used to obtain the background representation.", "cite_spans": [ { "start": 228, "end": 247, "text": "(Wang et al., 2005)", "ref_id": "BIBREF34" }, { "start": 383, "end": 396, "text": "(Chien, 2015)", "ref_id": "BIBREF7" }, { "start": 421, "end": 441, "text": "(Huang and Wu, 2007)", "ref_id": "BIBREF14" }, { "start": 1066, "end": 1077, "text": "(Lin, 2003)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments on the D-EV Model for Spoken Document Summarization", "sec_num": "4.3" }, { "text": "To begin with, we compare the performance levels of the proposed EV and D-EV models and two classic paragraph embedding methods (i.e., DM and DBOW) for spoken document summarization. All the models are paired with the density peaks clustering summarization method. The results are shown in Table 3 , from which several observations can be drawn. First, DBOW outperforms DM in our experiments, though DBOW is a simplified version of DM. Second, the proposed EV model outperforms DM and DBOW by a large margin, as expected. The results confirm that EV can modulate the impact of those stop or function words when inferring representations for paragraphs. That is to say, the proposed paragraph embedding method EV can indeed distill the most important aspects of a given paragraph and meanwhile suppress the impact of the general background information for producing a more discriminative paragraph representation. Thus, the relevance degree between any pair of sentence and document representations can be estimated more accurately. Third, the D-EV model consistently outperforms other paragraph embedding methods, including our own EV model. The outcome reveals that, although EV can achieve better performance than other classic paragraph embedding methods, the recognition errors inevitably make the inferred representations deviate from the original semantic content of spoken paragraphs. 
Accordingly, the results signal that the D-EV model can complement the 4 http://slam.iis.sinica.edu.tw/corpus/MATBN-corpus.htm 5 https://catalog.ldc.upenn.edu/LDC2011T13 deficiency of the EV model in spoken document summarization; we thus believe that it is more suitable for use in spoken content processing.", "cite_spans": [], "ref_spans": [ { "start": 290, "end": 297, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Experiments on the D-EV Model for Spoken Document Summarization", "sec_num": "4.3" }, { "text": "In the last set of experiments, we compare the results mentioned above with that of several wellpracticed, state-of-the-art unsupervised summarization methods, including the graph-based methods (i.e., the Markov random walk (MRW) method (Wan and Yang, 2008) and the LexRank method (Erkan and Radev, 2004) ) and the combinatorial optimization methods (i.e., the submodularity-based (SM) method (Lin and Bilmes, 2010) and the integer linear programming (ILP) method (Riedhammer et al., 2010) ). Among them, the ability of reducing redundant information has been aptly incorporated into the submodular-based method and the ILP method. Interested readers may refer to Penn and Zhu (2008) , Liu and Hakkani-Tur (2011) , and Nenkova and McKeown (2011) for comprehensive reviews and new insights into the major methods that have been developed and applied with good success to a wide range of spoken document summarization tasks. The results are also listed in Table 3 . Several noteworthy observations can be drawn from the results of these methods. First, although the two graph-based methods (i.e., MRW and LexRank) have similar motivations, MRW outperforms LexRank by a large margin. Second, although both SM and ILP have the ability to reduce redundant information when selecting indicative sentences to form a summary for a given document, ILP consistently outperforms SM. The reason might be that ILP performs a global optimization process to select representative sentences, whereas SM chooses sentences with a recursive strategy. Comparing the results of these strong baseline systems to that of the paragraph embedding methods (including DM, DBOW, EV, and D-EV) paired with the density peaks clustering summarization method, it is clear that all the paragraph embedding methods are better than the baseline methods. The results corroborate that, instead of only considering literal term matching for determining the similarity degree between a pair of sentence and document, incorporating concept (semantic) matching into the similarity measure leads to better performance. In particular, the proposed D-EV model is the most robust among all the methods compared in the paper, which supports the important notion of the proposed \"learning to distilling\" framework. We also want to note that the proposed methods (i.e., EV and D-EV) can also be incorporated with the graph-based methods and the combinatorial optimization methods. 
We leave this exploration for future work.", "cite_spans": [ { "start": 237, "end": 257, "text": "(Wan and Yang, 2008)", "ref_id": "BIBREF33" }, { "start": 281, "end": 304, "text": "(Erkan and Radev, 2004)", "ref_id": "BIBREF9" }, { "start": 393, "end": 415, "text": "(Lin and Bilmes, 2010)", "ref_id": "BIBREF19" }, { "start": 464, "end": 489, "text": "(Riedhammer et al., 2010)", "ref_id": "BIBREF29" }, { "start": 664, "end": 683, "text": "Penn and Zhu (2008)", "ref_id": "BIBREF27" }, { "start": 686, "end": 712, "text": "Liu and Hakkani-Tur (2011)", "ref_id": "BIBREF20" }, { "start": 719, "end": 745, "text": "Nenkova and McKeown (2011)", "ref_id": "BIBREF23" } ], "ref_spans": [ { "start": 954, "end": 961, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Experiments on the D-EV Model for Spoken Document Summarization", "sec_num": "4.3" }, { "text": "In this paper, we have proposed a novel paragraph embedding framework, which is embodied with the essence vector (EV) model and the denoising essence vector (D-EV) model, and made a step forward to evaluate the proposed methods on benchmark sentiment classification and document summarization tasks. Experimental results demonstrate that the proposed framework is the most robust among all the methods (including several well-practiced or/and state-of-the-art methods) compared in the paper, thereby indicating the potential of the new paragraph embedding framework. For future work, we will first focus on pairing the (denoising) essence vector model with other summarization methods. Moreover, we will explore other effective ways to integrate extra cues, such as speaker identities and relevance information, into the proposed framework. Furthermore, we also plan to extend the applications of the proposed framework to information retrieval and language modeling, among others.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "http://www-nlpir.nist.gov/projects/duc/ 3 https://catalog.ldc.upenn.edu/LDC2011T07", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A neural probabilistic language model", "authors": [ { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Rejean", "middle": [], "last": "Ducharme", "suffix": "" }, { "first": "Pascal", "middle": [], "last": "Vincent", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Jauvin", "suffix": "" } ], "year": 2003, "venue": "Journal of Machine Learning Research", "volume": "", "issue": "3", "pages": "1137--1155", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoshua Bengio, Rejean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic language model. Journal of Machine Learning Research (3):1137-1155.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Representation learning: a review and new perspectives. Pattern Analysis and Machine Intelligence", "authors": [ { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Courville", "suffix": "" }, { "first": "Pascal", "middle": [], "last": "Vincent", "suffix": "" } ], "year": 2013, "venue": "", "volume": "35", "issue": "", "pages": "1798--1828", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoshua Bengio, Aaron Courville, and Pascal Vincent. 2013. Representation learning: a review and new perspec- tives. 
Pattern Analysis and Machine Intelligence, 35(8):1798-1828.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Biographies, bollywood, boom-boxes and blenders: domain adaptation for sentiment classification", "authors": [ { "first": "John", "middle": [], "last": "Blitzer", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Dredze", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2007, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "187--205", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Blitzer, Mark Dredze, and Fernando Pereira. 2007. Biographies, bollywood, boom-boxes and blenders: do- main adaptation for sentiment classification. In Proceedings of ACL, pages 187-205.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Learning Summary Prior Representation for Extractive Summarization", "authors": [ { "first": "Ziqiang", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Furu", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Sujian", "middle": [], "last": "Li", "suffix": "" }, { "first": "Wenjie", "middle": [], "last": "Li", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Houfeng", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2015, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "829--833", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ziqiang Cao, Furu Wei, Sujian Li, Wenjie Li, Ming Zhou, and Houfeng Wang. 2015. Learning Summary Prior Representation for Extractive Summarization. In Proceedings of ACL, pages 829-833.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "The use of MMR, diversity based reranking for reordering documents and producing summaries", "authors": [ { "first": "Jaime", "middle": [], "last": "Carbonell", "suffix": "" }, { "first": "Jade", "middle": [], "last": "Goldstein", "suffix": "" } ], "year": 1998, "venue": "Proceedings of SIGIR", "volume": "", "issue": "", "pages": "335--336", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jaime Carbonell and Jade Goldstein. 1998. The use of MMR, diversity based reranking for reordering documents and producing summaries. In Proceedings of SIGIR, pages 335-336.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "LIBSVM: a library for support vector machines", "authors": [ { "first": "Chih-Chung", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Chih-Jen", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2011, "venue": "ACM Transactions on Intelligent Systems and Technology", "volume": "2", "issue": "27", "pages": "1--27", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chih-Chung Chang and Chih-Jen Lin. 2011. LIBSVM: a library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2(27):1-27.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "I-vector based language modeling for spoken document retrieval", "authors": [ { "first": "Hung-Shin", "middle": [], "last": "Kuan-Yu Chen", "suffix": "" }, { "first": "Hsin-Min", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Berlin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2014, "venue": "Proceedings of ICASSP", "volume": "", "issue": "", "pages": "7083--7088", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kuan-Yu Chen, Hung-Shin Lee, Hsin-Min Wang, and Berlin Chen. 2014. 
I-vector based language modeling for spoken document retrieval. In Proceedings of ICASSP, pages 7083-7088.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Hierarchical Pitman-Yor-Dirichlet language model", "authors": [ { "first": "Jen-Tzung", "middle": [], "last": "Chien", "suffix": "" } ], "year": 2015, "venue": "IEEE/ACM Transactions on Audio, Speech and Language Processing", "volume": "23", "issue": "8", "pages": "1259--1272", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jen-Tzung Chien. 2015. Hierarchical Pitman-Yor-Dirichlet language model. IEEE/ACM Transactions on Audio, Speech and Language Processing, 23(8): 1259-1272.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A unified architecture for natural language processing: deep neural networks with multitask learning", "authors": [ { "first": "Ronan", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ICML", "volume": "", "issue": "", "pages": "160--167", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: deep neural networks with multitask learning. In Proceedings of ICML, pages 160-167.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "LexRank: Graph-based lexical centrality as salience in text summarization", "authors": [ { "first": "Gunes", "middle": [], "last": "Erkan", "suffix": "" }, { "first": "Dragomir", "middle": [ "R" ], "last": "Radev", "suffix": "" } ], "year": 2004, "venue": "Journal of Artificial Intelligent Research", "volume": "22", "issue": "1", "pages": "457--479", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gunes Erkan and Dragomir R. Radev. 2004. LexRank: Graph-based lexical centrality as salience in text summa- rization. Journal of Artificial Intelligent Research, 22(1):457-479.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Fundamental technologies in modern speech recognition", "authors": [ { "first": "Sadaoki", "middle": [], "last": "Furui", "suffix": "" }, { "first": "Li", "middle": [], "last": "Deng", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Gales", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" }, { "first": "Keiichi", "middle": [], "last": "Tokuda", "suffix": "" } ], "year": 2012, "venue": "IEEE Signal Processing Magazine", "volume": "29", "issue": "6", "pages": "16--17", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sadaoki Furui, Li Deng, Mark Gales, Hermann Ney, and Keiichi Tokuda. 2012. Fundamental technologies in modern speech recognition. IEEE Signal Processing Magazine, 29(6):16-17.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Generic text summarization using relevance measure and latent semantic analysis", "authors": [ { "first": "Yihong", "middle": [], "last": "Gong", "suffix": "" }, { "first": "Xin", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2001, "venue": "Proceedings of SIGIR", "volume": "", "issue": "", "pages": "19--25", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yihong Gong and Xin Liu. 2001. Generic text summarization using relevance measure and latent semantic analysis. 
In Proceedings of SIGIR, pages 19-25.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Deep Learning", "authors": [ { "first": "Ian", "middle": [], "last": "Goodfellow", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Courville", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ian Goodfellow, Yoshua Bengio, and Aaron Courville. 2016. Deep Learning. Cambridge, MA: MIT Press.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Learning deep structured semantic models for web search using clickthrough data", "authors": [ { "first": "Po-Sen", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "He", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Li", "middle": [], "last": "Deng", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Acero", "suffix": "" }, { "first": "Larry", "middle": [], "last": "Heck", "suffix": "" } ], "year": 2013, "venue": "Proceedings of CIKM", "volume": "", "issue": "", "pages": "2333--2338", "other_ids": {}, "num": null, "urls": [], "raw_text": "Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. 2013. Learning deep structured semantic models for web search using clickthrough data. In Proceedings of CIKM, pages 2333-2338.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Spoken Document Retrieval Using Multi-Level Knowledge and Semantic Verification", "authors": [ { "first": "Lin", "middle": [], "last": "Chien", "suffix": "" }, { "first": "Chung-Hsien", "middle": [], "last": "Huang", "suffix": "" }, { "first": "", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2007, "venue": "IEEE Transactions on Audio, Speech, and Language Processing", "volume": "15", "issue": "8", "pages": "2551--2560", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chien-Lin Huang and Chung-Hsien Wu. 2007. Spoken Document Retrieval Using Multi-Level Knowledge and Semantic Verification. IEEE Transactions on Audio, Speech, and Language Processing, 15(8): 2551-2560.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Extractive summarization using continuous vector space models", "authors": [ { "first": "Mikael", "middle": [], "last": "Kageback", "suffix": "" }, { "first": "Olof", "middle": [], "last": "Mogren", "suffix": "" }, { "first": "Nina", "middle": [], "last": "Tahmasebi", "suffix": "" }, { "first": "Devdatt", "middle": [], "last": "Dubhashi", "suffix": "" } ], "year": 2014, "venue": "Proceedings of CVSC", "volume": "", "issue": "", "pages": "31--39", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mikael Kageback, Olof Mogren, Nina Tahmasebi, and Devdatt Dubhashi. 2014. Extractive summarization using continuous vector space models. In Proceedings of CVSC, pages 31-39.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "ADAM: A method for stochastic optimization", "authors": [ { "first": "Diederik", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2015, "venue": "Proceedings of ICLR", "volume": "", "issue": "", "pages": "1--15", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik Kingma and Jimmy Ba. 2015. ADAM: A method for stochastic optimization. 
In Proceedings of ICLR, pages 1-15.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Distributed representations of sentences and documents", "authors": [ { "first": "Quoc", "middle": [], "last": "Le", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2014, "venue": "Proceedings of ICML", "volume": "", "issue": "", "pages": "1188--1196", "other_ids": {}, "num": null, "urls": [], "raw_text": "Quoc Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In Proceedings of ICML, pages 1188-1196.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "ROUGE: Recall-oriented understudy for gisting evaluation", "authors": [ { "first": "Chin-Yew", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chin-Yew Lin. 2003. ROUGE: Recall-oriented understudy for gisting evaluation. [Online]. Available: http://haydn.isi.edu/ROUGE/.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Multi-document summarization via budgeted maximization of submodular functions", "authors": [ { "first": "Hui", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Bilmes", "suffix": "" } ], "year": 2010, "venue": "Proceedings of NAACL HLT", "volume": "", "issue": "", "pages": "912--920", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hui Lin and Jeff Bilmes. 2010. Multi-document summarization via budgeted maximization of submodular func- tions. In Proceedings of NAACL HLT, pages 912-920.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Speech summarization. Chapter 13 in Spoken Language Understanding: Systems for Extracting Semantic Information from", "authors": [ { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Dilek", "middle": [], "last": "Hakkani-Tur", "suffix": "" } ], "year": 2011, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yang Liu and Dilek Hakkani-Tur. 2011. Speech summarization. Chapter 13 in Spoken Language Understanding: Systems for Extracting Semantic Information from Speech. G. Tur and R. D. Mori (Eds), New York: Wiley.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Combining relevance language modeling and clarity measure for extractive speech summarization", "authors": [ { "first": "Shih-Hung", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Kuan-Yu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Berlin", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Hsin-Min", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Wen-Lian", "middle": [], "last": "Hsu-Chun Yen", "suffix": "" }, { "first": "", "middle": [], "last": "Hsu", "suffix": "" } ], "year": 2015, "venue": "Speech, and Language Processing", "volume": "23", "issue": "", "pages": "957--969", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shih-Hung Liu, Kuan-Yu Chen, Berlin Chen, Hsin-Min Wang, Hsu-Chun Yen, and Wen-Lian Hsu. 2015. Com- bining relevance language modeling and clarity measure for extractive speech summarization. 
IEEE/ACM Transactions on Audio, Speech, and Language Processing, 23(6): 957-969.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Proceedings of ICLR", "volume": "", "issue": "", "pages": "1--12", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. In Proceedings of ICLR, pages 1-12.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Automatic summarization. Foundations and Trends in Information Retrieval", "authors": [ { "first": "Ani", "middle": [], "last": "Nenkova", "suffix": "" }, { "first": "Kathleen", "middle": [], "last": "Mckeown", "suffix": "" } ], "year": 2011, "venue": "", "volume": "5", "issue": "", "pages": "103--233", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ani Nenkova and Kathleen McKeown. 2011. Automatic summarization. Foundations and Trends in Information Retrieval, 5(2-3): 103-233.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Speech technology and information access", "authors": [ { "first": "Mari", "middle": [], "last": "Ostendorf", "suffix": "" } ], "year": 2008, "venue": "IEEE Signal Processing Magazine", "volume": "25", "issue": "3", "pages": "150--152", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mari Ostendorf. 2008. Speech technology and information access. IEEE Signal Processing Magazine, 25(3):150- 152.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Spoken document understanding and organization", "authors": [ { "first": "Berlin", "middle": [], "last": "Lin-Shan Lee", "suffix": "" }, { "first": "", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2005, "venue": "IEEE Signal Processing Magazine", "volume": "22", "issue": "5", "pages": "42--60", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lin-shan Lee and Berlin Chen. 2005. Spoken document understanding and organization. IEEE Signal Processing Magazine, 22(5):42-60.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Deep sentence embedding using the long short term memory network: analysis and application to information retrieval", "authors": [ { "first": "Hamid", "middle": [], "last": "Palangi", "suffix": "" }, { "first": "Li", "middle": [], "last": "Deng", "suffix": "" }, { "first": "Yelong", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "He", "suffix": "" }, { "first": "Jianshu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Xinying", "middle": [], "last": "Song", "suffix": "" }, { "first": "Rabab", "middle": [], "last": "Ward", "suffix": "" } ], "year": 2015, "venue": "Proceedings", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1502.06922" ] }, "num": null, "urls": [], "raw_text": "Hamid Palangi, Li Deng, Yelong Shen, Jianfeng Gao, Xiaodong He, Jianshu Chen, Xinying Song, and Rabab Ward. 2015. Deep sentence embedding using the long short term memory network: analysis and application to information retrieval. 
arXiv preprint arXiv:1502.06922.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "A critical reassessment of evaluation baselines for speech summarization", "authors": [ { "first": "Gerald", "middle": [], "last": "Penn", "suffix": "" }, { "first": "Xiaodan", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "470--478", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gerald Penn and Xiaodan Zhu. 2008. A critical reassessment of evaluation baselines for speech summarization. In Proceedings of ACL, pages 470-478.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "GloVe: Global vector for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vector for word representation. In Proceedings of EMNLP, pages 1532-1543.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Long story short -Global unsupervised models for keyphrase based meeting summarization", "authors": [ { "first": "Korbinian", "middle": [], "last": "Riedhammer", "suffix": "" }, { "first": "Dilek", "middle": [], "last": "Benoit Favre", "suffix": "" }, { "first": "", "middle": [], "last": "Hakkani-Tur", "suffix": "" } ], "year": 2010, "venue": "Speech Communication", "volume": "52", "issue": "10", "pages": "801--815", "other_ids": {}, "num": null, "urls": [], "raw_text": "Korbinian Riedhammer, Benoit Favre, and Dilek Hakkani-Tur. 2010. Long story short - Global unsupervised models for keyphrase based meeting summarization. Speech Communication, 52(10):801-815.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Clustering by fast search and find of density peaks", "authors": [ { "first": "Alex", "middle": [], "last": "Rodriguez", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Laio", "suffix": "" } ], "year": 2014, "venue": "Science", "volume": "344", "issue": "6191", "pages": "1492--1496", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Rodriguez and Alessandro Laio. 2014. Clustering by fast search and find of density peaks. Science, 344(6191): 1492-1496.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Learning sentiment-specific word embedding for twitter sentiment classification", "authors": [ { "first": "Duyu", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Furu", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Nan", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Qin", "suffix": "" } ], "year": 2014, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "1555--1565", "other_ids": {}, "num": null, "urls": [], "raw_text": "Duyu Tang, Furu Wei, Nan Yang, Ming Zhou, Ting Liu, and Bing Qin. 2014. Learning sentiment-specific word embedding for twitter sentiment classification.
In Proceedings of ACL, pages 1555-1565.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Automatic text summarization", "authors": [], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Juan-Manuel Torres-Moreno (Eds.). 2014. Automatic text summarization. WILEY-ISTE.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Multi-document summarization using cluster-based link analysis", "authors": [ { "first": "Xiaojun", "middle": [], "last": "Wan", "suffix": "" }, { "first": "Jianwu", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2008, "venue": "Proceedings of SIGIR", "volume": "", "issue": "", "pages": "299--306", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaojun Wan and Jianwu Yang. 2008. Multi-document summarization using cluster-based link analysis. In Proceedings of SIGIR, pages 299-306.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "MATBN: A Mandarin Chinese broadcast news corpus", "authors": [ { "first": "Hsin-Min", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Berlin", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Jen-Wei", "middle": [], "last": "Kuo", "suffix": "" }, { "first": "Shih-Sian", "middle": [], "last": "Cheng", "suffix": "" } ], "year": 2005, "venue": "International Journal of Computational Linguistics and Chinese Language Processing", "volume": "10", "issue": "2", "pages": "219--236", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hsin-Min Wang, Berlin Chen, Jen-Wei Kuo, and Shih-Sian Cheng. 2005. MATBN: A Mandarin Chinese broadcast news corpus. International Journal of Computational Linguistics and Chinese Language Processing, 10(2):219-236.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Clustering sentences with density peaks for multidocument summarization", "authors": [ { "first": "Yang", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yunqing", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Wenmin", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2015, "venue": "Proceedings of NAACL", "volume": "", "issue": "", "pages": "1262--1267", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yang Zhang, Yunqing Xia, Yi Liu, and Wenmin Wang. 2015. Clustering sentences with density peaks for multi-document summarization. In Proceedings of NAACL, pages 1262-1267.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "text": "Illustrations of the essence vector model. attention coefficients, and the Adam optimizer (Kingma and Ba, 2015) is employed to solve the optimization problem. At test time, a given paragraph can obtain its own representation by being passed through the paragraph encoder (i.e., (\u2022)). Figure 1 illustrates the architecture of the EV model.", "uris": null }, "FIGREF1": { "num": null, "type_str": "figure", "text": "illustrates the architecture of the proposed D-EV model.", "uris": null }, "FIGREF2": { "num": null, "type_str": "figure", "text": "Illustrations of the denoising essence vector model.", "uris": null }, "TABREF0": { "content": "
Since each learned paragraph representation contains only the most informative/discriminative part of the original paragraph, we assume that the weighted combination of that essence representation and the general background representation can be mapped back to the original paragraph by a decoder \u210e(\u2022):
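One plausible way to write this reconstruction step, in notation assumed here rather than taken from the paper (EV_P for the essence representation of paragraph P, BV for the shared background representation, \alpha_P for the learned attention coefficient, and \hat{P} for the reconstructed representation of P), is:

\hat{P} \approx h\big( \alpha_P \cdot \mathrm{EV}_P + (1 - \alpha_P) \cdot \mathrm{BV} \big)

where \alpha_P plays the role of the attention coefficients mentioned in the description of Figure 1; this sketch only illustrates the stated assumption and is not necessarily the exact objective optimized in the paper.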
", "num": null, "type_str": "table", "html": null, "text": "Both (\u2022) and (\u2022) are fully connected deep networks with different model parameters and , respectively. It is worthy to note that (\u2022) and (\u2022) can have same or different architectures." }, "TABREF1": { "content": "
                       Books    DVD      Electronics    Kitchen    Average
PCA                    0.762    0.769    0.807          0.824      0.790
EV                     0.796    0.812    0.839          0.858      0.826
Unigrams               0.797    0.805    0.837          0.860      0.824
Bigrams                0.798    0.779    0.819          0.857      0.813
Unigrams+Bigrams       0.810    0.821    0.852          0.884      0.842
Unigrams+PCA           0.799    0.812    0.835          0.860      0.826
Unigrams+EV            0.806    0.813    0.833          0.871      0.831
Unigrams+Bigrams+PCA   0.810    0.821    0.852          0.884      0.842
Unigrams+Bigrams+EV    0.838    0.824    0.862          0.890      0.853
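The combined feature sets above (e.g., Unigrams+Bigrams+EV) pair sparse n-gram counts with a dense paragraph embedding. As a rough illustration only, with an assumed classifier choice and random stand-in vectors in place of real essence-vector embeddings, such a combination could be assembled as follows:

```python
# Minimal sketch (not the paper's exact pipeline): concatenate unigram/bigram
# counts with a dense paragraph embedding and train a linear sentiment classifier.
# The embeddings here are random stand-ins for precomputed essence vectors.
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

docs = ["the plot was dull and predictable", "a genuinely moving and well acted film"]
labels = [0, 1]  # 0 = negative, 1 = positive

ngrams = CountVectorizer(ngram_range=(1, 2)).fit_transform(docs)   # unigrams + bigrams
ev = np.random.default_rng(0).normal(size=(len(docs), 8))          # stand-in EV embeddings
features = hstack([ngrams, csr_matrix(ev)]).tocsr()                # "Unigrams+Bigrams+EV"

clf = LogisticRegression(max_iter=1000).fit(features, labels)
print(clf.predict(features))
```

In the actual experiments the dense block would be the embeddings inferred by the trained paragraph encoder, not random vectors.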
", "num": null, "type_str": "table", "html": null, "text": "https://www.cs.jhu.edu/~mdredze/datasets/sentiment/ Experimental results on sentiment analysis achieved by the proposed EV model and other baseline features, including unigrams, bigrams, PCA, and the combinations." }, "TABREF3": { "content": "", "num": null, "type_str": "table", "html": null, "text": "Experimental results of multi-document summarization achieved by the proposed EV model and several state-of-the-art summarization methods." }, "TABREF5": { "content": "
", "num": null, "type_str": "table", "html": null, "text": "Experimental results of spoken document summarization achieved by the proposed EV and D-EV models and several state-of-the-art summarization methods." } } } }