{ "paper_id": "P16-1022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:55:53.584051Z" }, "title": "Compressing Neural Language Models by Sparse Word Representations", "authors": [ { "first": "Yunchuan", "middle": [], "last": "Chen", "suffix": "", "affiliation": { "laboratory": "Key Laboratory of High Confidence Software Technologies (Peking University), MoE", "institution": "", "location": { "country": "China" } }, "email": "chenyunchuan11@mails.ucas.ac.cn" }, { "first": "Lili", "middle": [], "last": "Mou", "suffix": "", "affiliation": { "laboratory": "Key Laboratory of High Confidence Software Technologies (Peking University), MoE", "institution": "", "location": { "country": "China" } }, "email": "" }, { "first": "Yan", "middle": [], "last": "Xu", "suffix": "", "affiliation": { "laboratory": "Key Laboratory of High Confidence Software Technologies (Peking University), MoE", "institution": "", "location": { "country": "China" } }, "email": "" }, { "first": "Ge", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "Key Laboratory of High Confidence Software Technologies (Peking University), MoE", "institution": "", "location": { "country": "China" } }, "email": "lige@pku.edu" }, { "first": "Zhi", "middle": [], "last": "Jin", "suffix": "", "affiliation": { "laboratory": "Key Laboratory of High Confidence Software Technologies (Peking University), MoE", "institution": "", "location": { "country": "China" } }, "email": "zhijin@pku.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Neural networks are among the state-ofthe-art techniques for language modeling. Existing neural language models typically map discrete words to distributed, dense vector representations. After information processing of the preceding context words by hidden layers, an output layer estimates the probability of the next word. Such approaches are time-and memory-intensive because of the large numbers of parameters for word embeddings and the output layer. In this paper, we propose to compress neural language models by sparse word representations. In the experiments, the number of parameters in our model increases very slowly with the growth of the vocabulary size, which is almost imperceptible. Moreover, our approach not only reduces the parameter space to a large extent, but also improves the performance in terms of the perplexity measure. 1", "pdf_parse": { "paper_id": "P16-1022", "_pdf_hash": "", "abstract": [ { "text": "Neural networks are among the state-ofthe-art techniques for language modeling. Existing neural language models typically map discrete words to distributed, dense vector representations. After information processing of the preceding context words by hidden layers, an output layer estimates the probability of the next word. Such approaches are time-and memory-intensive because of the large numbers of parameters for word embeddings and the output layer. In this paper, we propose to compress neural language models by sparse word representations. In the experiments, the number of parameters in our model increases very slowly with the growth of the vocabulary size, which is almost imperceptible. Moreover, our approach not only reduces the parameter space to a large extent, but also improves the performance in terms of the perplexity measure. 
1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Language models (LMs) play an important role in a variety of applications in natural language processing (NLP), including speech recognition and document recognition. In recent years, neural network-based LMs have achieved significant breakthroughs: they can model language more precisely than traditional n-gram statistics (Mikolov et al., 2011) ; it is even possible to generate new sentences from a neural LM, benefiting various downstream tasks like machine translation, summarization, and dialogue systems (Devlin et al., 2014; Rush et al., 2015; Sordoni et al., 2015; Mou et al., 2015b) . 1 Code released on https://github.com/chenych11/lm Existing neural LMs typically map a discrete word to a distributed, real-valued vector representation (called embedding) and use a neural model to predict the probability of each word in a sentence. Such approaches necessitate a large number of parameters to represent the embeddings and the output layer's weights, which is unfavorable in many scenarios. First, with a wider application of neural networks in resourcerestricted systems (Hinton et al., 2015) , such approach is too memory-consuming and may fail to be deployed in mobile phones or embedded systems. Second, as each word is assigned with a dense vector-which is tuned by gradient-based methods-neural LMs are unlikely to learn meaningful representations for infrequent words. The reason is that infrequent words' gradient is only occasionally computed during training; thus their vector representations can hardly been tuned adequately.", "cite_spans": [ { "start": 324, "end": 346, "text": "(Mikolov et al., 2011)", "ref_id": "BIBREF19" }, { "start": 511, "end": 532, "text": "(Devlin et al., 2014;", "ref_id": "BIBREF5" }, { "start": 533, "end": 551, "text": "Rush et al., 2015;", "ref_id": "BIBREF26" }, { "start": 552, "end": 573, "text": "Sordoni et al., 2015;", "ref_id": "BIBREF27" }, { "start": 574, "end": 592, "text": "Mou et al., 2015b)", "ref_id": "BIBREF25" }, { "start": 595, "end": 596, "text": "1", "ref_id": null }, { "start": 1083, "end": 1104, "text": "(Hinton et al., 2015)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we propose a compressed neural language model where we can reduce the number of parameters to a large extent. To accomplish this, we first represent infrequent words' embeddings with frequent words' by sparse linear combinations. This is inspired by the observation that, in a dictionary, an unfamiliar word is typically defined by common words. We therefore propose an optimization objective to compute the sparse codes of infrequent words. The property of sparseness (only 4-8 values for each word) ensures the efficiency of our model. Based on the pre-computed sparse codes, we design our compressed language model as follows. A dense embedding is assigned to each common word; an infrequent word, on the other hand, computes its vector representation by a sparse combination of common words' embeddings. We use the long short term memory (LSTM)-based recurrent neural network (RNN) as the hidden layer of our model. The weights of the output layer are also compressed in a same way as embeddings. Consequently, the number of trainable neural parameters is a constant regardless of the vocabulary size if we ignore the biases of words. 
Even considering sparse codes (which are very small), we find the memory consumption grows imperceptibly with respect to the vocabulary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We evaluate our LM on the Wikipedia corpus containing up to 1.6 billion words. During training, we adopt noise-contrastive estimation (NCE) (Gutmann and Hyv\u00e4rinen, 2012) to estimate the parameters of our neural LMs. However, different from Mnih and Teh (2012) , we tailor the NCE method by adding a regression layer (called ZRegressoion) to predict the normalization factor, which stabilizes the training process. Experimental results show that, our compressed LM not only reduces the memory consumption, but also improves the performance in terms of the perplexity measure.", "cite_spans": [ { "start": 140, "end": 169, "text": "(Gutmann and Hyv\u00e4rinen, 2012)", "ref_id": "BIBREF8" }, { "start": 240, "end": 259, "text": "Mnih and Teh (2012)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To sum up, the main contributions of this paper are three-fold. 1We propose an approach to represent uncommon words' embeddings by a sparse linear combination of common ones'. 2We propose a compressed neural language model based on the pre-computed sparse codes. The memory increases very slowly with the vocabulary size (4-8 values for each word). (3) We further introduce a ZRegression mechanism to stabilize the NCE algorithm, which is potentially applicable to other LMs in general.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Language modeling aims to minimize the joint probability of a corpus (Jurafsky and Martin, 2014) . Traditional n-gram models impose a Markov assumption that a word is only dependent on previous n \u2212 1 words and independent of its position. When estimating the parameters, researchers have proposed various smoothing techniques including back-off models to alleviate the problem of data sparsity. propose to use a feedforward neural network (FFNN) to replace the multinomial parameter estimation in n-gram models. Recurrent neural networks (RNNs) can also be used for language modeling; they are especially capable of capturing long range dependencies in sentences (Mikolov et al., 2010; Sundermeyer et ", "cite_spans": [ { "start": 69, "end": 96, "text": "(Jurafsky and Martin, 2014)", "ref_id": "BIBREF13" }, { "start": 663, "end": 685, "text": "(Mikolov et al., 2010;", "ref_id": "BIBREF18" }, { "start": 686, "end": 700, "text": "Sundermeyer et", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Standard Neural LMs", "sec_num": "2.1" }, { "text": "In the above models, we can view that a neural LM is composed of three main parts, namely the Embedding, Encoding, and Prediction subnets, as shown in Figure 1 .", "cite_spans": [], "ref_spans": [ { "start": 151, "end": 159, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "al., 2015).", "sec_num": null }, { "text": "The Embedding subnet maps a word to a dense vector, representing some abstract features of the word (Mikolov et al., 2013) . 
Note that this subnet usually accepts a list of words (known as history or context words) and outputs a sequence of word embeddings.", "cite_spans": [ { "start": 100, "end": 122, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "al., 2015).", "sec_num": null }, { "text": "The Encoding subnet encodes the history of a target word into a dense vector (known as context or history representation). We may either leverage FFNNs or RNNs (Mikolov et al., 2010) as the Encoding subnet, but RNNs typically yield a better performance (Sundermeyer et al., 2015) .", "cite_spans": [ { "start": 160, "end": 182, "text": "(Mikolov et al., 2010)", "ref_id": "BIBREF18" }, { "start": 253, "end": 279, "text": "(Sundermeyer et al., 2015)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "al., 2015).", "sec_num": null }, { "text": "The Prediction subnet outputs a distribution of target words as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "al., 2015).", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(w = w i |h) = exp(s(h, w i )) j exp(s(h, w j )) ,", "eq_num": "(1)" } ], "section": "al., 2015).", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "s(h, w i ) =W i h + b i ,", "eq_num": "(2)" } ], "section": "al., 2015).", "sec_num": null }, { "text": "where h is the vector representation of context/history h, obtained by the Encoding subnet.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "al., 2015).", "sec_num": null }, { "text": "W = (W 1 , W 2 , . . . , W V ) \u2208 R C\u00d7V is the output weights of Prediction; b = (b 1 , b 2 , . . . , b V ) \u2208 R C", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "al., 2015).", "sec_num": null }, { "text": "is the bias (the prior). s(h, w i ) is a scoring function indicating the degree to which the context h matches a target word w i . (V is the size of vocabulary V; C is the dimension of context/history, given by the Encoding subnet.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "al., 2015).", "sec_num": null }, { "text": "Neural network-based LMs can capture more precise semantics of natural language than n-gram models because the regularity of the Embedding subnet extracts meaningful semantics of a word and the high capacity of Encoding subnet enables complicated information processing. Despite these, neural LMs also suffer from several disadvantages mainly out of complexity concerns.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Complexity Concerns of Neural LMs", "sec_num": "2.2" }, { "text": "Time complexity. Training neural LMs is typically time-consuming especially when the vocabulary size is large. The normalization factor in Equation (1) contributes most to time complexity. Morin and Bengio (2005) propose hierarchical softmax by using a Bayesian network so that the probability is self-normalized. Sampling techniques-for example, importance sampling (Bengio and Sen\u00e9cal, 2003) , noise-contrastive estimation (Gutmann and Hyv\u00e4rinen, 2012) , and target sampling (Jean et al., 2014) -are applied to avoid computation over the entire vocabulary. 
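For reference, the full computation over the entire vocabulary that these techniques avoid is sketched below (a minimal NumPy sketch of Equations (1)-(2); the function and variable names are ours, not from the released code):

import numpy as np

# Full-softmax Prediction subnet, Equations (1)-(2).
#   h : (C,)   context vector produced by the Encoding subnet
#   W : (C, V) output weight matrix, one column W_i per target word
#   b : (V,)   one scalar bias b_i per target word
def full_softmax(h, W, b):
    s = W.T @ h + b          # scores s(h, w_i) for every word: O(C * V) per prediction
    s = s - s.max()          # subtract the max for numerical stability
    e = np.exp(s)
    return e / e.sum()       # the normalization factor sums over all V words
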
Infrequent normalization maximizes the unnormalized likelihood with a penalty term that favors normalized predictions (Andreas and Klein, 2014) .", "cite_spans": [ { "start": 189, "end": 212, "text": "Morin and Bengio (2005)", "ref_id": "BIBREF23" }, { "start": 367, "end": 393, "text": "(Bengio and Sen\u00e9cal, 2003)", "ref_id": "BIBREF1" }, { "start": 425, "end": 454, "text": "(Gutmann and Hyv\u00e4rinen, 2012)", "ref_id": "BIBREF8" }, { "start": 477, "end": 496, "text": "(Jean et al., 2014)", "ref_id": "BIBREF12" }, { "start": 677, "end": 702, "text": "(Andreas and Klein, 2014)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Complexity Concerns of Neural LMs", "sec_num": "2.2" }, { "text": "Memory complexity and model complexity. The number of parameters in the Embedding and Prediction subnets in neural LMs increases linearly with respect to the vocabulary size, which is large (Table 1 ). As said in Section 1, this is sometimes unfavorable in memory-restricted systems. Even with sufficient hardware resources, it is problematic because we are unlikely to fully tune these parameters. Chen et al. (2015) propose the differentiated softmax model by assigning fewer parameters to rare words than to frequent words. However, their approach only handles the output weights, i.e., W in Equation 2; the input embeddings remain uncompressed in their approach.", "cite_spans": [ { "start": 399, "end": 417, "text": "Chen et al. (2015)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 190, "end": 198, "text": "(Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Complexity Concerns of Neural LMs", "sec_num": "2.2" }, { "text": "In this work, we mainly focus on memory and model complexity, i.e., we propose a novel method to compress the Embedding and Prediction subnets in neural language models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Complexity Concerns of Neural LMs", "sec_num": "2.2" }, { "text": "Existing work on model compression for neural networks. Bucilu\u01ce et al. (2006) ", "cite_spans": [ { "start": 56, "end": 77, "text": "Bucilu\u01ce et al. (2006)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2.3" }, { "text": "Embedding V E V E Encoding 4(CE + C 2 + C) nCE + C Prediction V (C + 1) V (C + 1) TOTAL \u2020 O((C + E)V ) O((E + C)V )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2.3" }, { "text": "V C (or E).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2.3" }, { "text": "compression typically works with a compromise of performance. On the contrary, our model improves the perplexity measure after compression. Sparse word representations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2.3" }, { "text": "We leverage sparse codes of words to compress neural LMs. Faruqui et al. (2015) propose a sparse coding method to represent each word with a sparse vector. They solve an optimization problem to obtain the sparse vectors of words as well as a dictionary matrix simultaneously. By contrast, we do not estimate any dictionary matrix when learning sparse codes, which results in a simple and easyto-optimize model.", "cite_spans": [ { "start": 58, "end": 79, "text": "Faruqui et al. 
(2015)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2.3" }, { "text": "In this section, we describe our compressed language model in detail. Subsection 3.1 formalizes the sparse representation of words, serving as the premise of our model. On such a basis, we compress the Embedding and Prediction subnets in Subsections 3.2 and 3.3, respectively. Finally, Subsection 3.4 introduces NCE for parameter estimation where we further propose the ZRegression mechanism to stabilize our model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Our Proposed Model", "sec_num": "3" }, { "text": "We split the vocabulary V into two disjoint subsets (B and C). The first subset B is a base set, containing a fixed number of common words (8k in our experiments). C = V\\B is a set of uncommon words. We would like to use B's word embeddings to encode C's.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sparse Representations of Words", "sec_num": "3.1" }, { "text": "Our intuition is that oftentimes a word can be defined by a few other words, and that rare words should be defined by common ones. Therefore, it is reasonable to use a few common words' embeddings to represent that of a rare word. Following most work in the literature (Lee et al., 2006; Yang et al., 2011) , we represent each uncommon word with a sparse, linear combination of com-mon ones' embeddings. The sparse coefficients are called a sparse code for a given word.", "cite_spans": [ { "start": 269, "end": 287, "text": "(Lee et al., 2006;", "ref_id": "BIBREF16" }, { "start": 288, "end": 306, "text": "Yang et al., 2011)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Sparse Representations of Words", "sec_num": "3.1" }, { "text": "We first train a word representation model like SkipGram (Mikolov et al., 2013) to obtain a set of embeddings for each word in the vocabulary, including both common words and rare words. Suppose U = (U 1 , U 2 , . . . , U B ) \u2208 R E\u00d7B is the (learned) embedding matrix of common words, i.e., U i is the embedding of i-th word in B. (Here, B = |B|.)", "cite_spans": [ { "start": 57, "end": 79, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Sparse Representations of Words", "sec_num": "3.1" }, { "text": "Each word in B has a natural sparse code (denoted as x): it is a one-hot vector with B elements, the i-th dimension being on for the i-th word in B.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sparse Representations of Words", "sec_num": "3.1" }, { "text": "For a word w \u2208 C, we shall learn a sparse vector x = (x 1 , x 2 , . . . , x B ) as the sparse code of the word. 
Provided that x has been learned (which will be introduced shortly), the embedding of w i\u015d", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sparse Representations of Words", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "w = B j=1 x j U j = U x,", "eq_num": "(3)" } ], "section": "Sparse Representations of Words", "sec_num": "3.1" }, { "text": "To learn the sparse representation of a certain word w, we propose the following optimization objective", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sparse Representations of Words", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "min x U x \u2212 w 2 2 + \u03b1 x 1 + \u03b2|1 x \u2212 1| + \u03b31 max{0, \u2212x},", "eq_num": "(4)" } ], "section": "Sparse Representations of Words", "sec_num": "3.1" }, { "text": "where max denotes the component-wise maximum; w is the embedding for a rare word w \u2208 C. The first term (called fitting loss afterwards) evaluates the closeness between a word's coded vector representation and its \"true\" representation w, which is the general goal of sparse coding.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sparse Representations of Words", "sec_num": "3.1" }, { "text": "The second term is an 1 regularizer, which encourages a sparse solution. The last two regularization terms favor a solution that sums to 1 and that is nonnegative, respectively. The nonnegative regularizer is applied as in He et al. (2012) due to psychological interpretation concerns.", "cite_spans": [ { "start": 223, "end": 239, "text": "He et al. (2012)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Sparse Representations of Words", "sec_num": "3.1" }, { "text": "It is difficult to determine the hyperparameters \u03b1, \u03b2, and \u03b3. Therefore we perform several tricks. First, we drop the last term in the problem (4), but clip each element in x so that all the sparse codes are nonnegative during each update of training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sparse Representations of Words", "sec_num": "3.1" }, { "text": "Second, we re-parametrize \u03b1 and \u03b2 by balancing the fitting loss and regularization terms dynamically during training. Concretely, we solve the following optimization problem, which is slightly different but closely related to the conceptual objective (4):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sparse Representations of Words", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "min x L(x) + \u03b1 t R 1 (x) + \u03b2 t R 2 (x),", "eq_num": "(5)" } ], "section": "Sparse Representations of Words", "sec_num": "3.1" }, { "text": "where L(x) = U x \u2212 w 2 2 , R 1 (x) = x 1 , and R 2 (x) = |1 x\u22121|. \u03b1 t and \u03b2 t are adaptive parameters that are resolved during training time. Suppose x t is the value we obtain after the update of the t-th step, we expect the importance of fitness and regularization remain unchanged during training. 
This is equivalent to", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sparse Representations of Words", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03b1 t R 1 (x t ) L(x t ) = w \u03b1 \u2261 const,", "eq_num": "(6)" } ], "section": "Sparse Representations of Words", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03b2 t R 2 (x t ) L(x t ) = w \u03b2 \u2261 const.", "eq_num": "(7)" } ], "section": "Sparse Representations of Words", "sec_num": "3.1" }, { "text": "or", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sparse Representations of Words", "sec_num": "3.1" }, { "text": "\u03b1 t = L(x t ) R 1 (x t ) w \u03b1 and \u03b2 t = L(x t ) R 2 (x t ) w \u03b2 ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sparse Representations of Words", "sec_num": "3.1" }, { "text": "where w \u03b1 and w \u03b2 are the ratios between the regularization loss and the fitting loss. They are much easier to specify than \u03b1 or \u03b2 in the problem (4).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sparse Representations of Words", "sec_num": "3.1" }, { "text": "We have two remarks as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sparse Representations of Words", "sec_num": "3.1" }, { "text": "\u2022 To learn the sparse codes, we first train the \"true\" embeddings by word2vec 2 for both common words and rare words. However, these true embeddings are slacked during our language modeling. \u2022 As the codes are pre-computed and remain unchanged during language modeling, they are not tunable parameters of our neural model. Considering the learned sparse codes, we need only 4-8 values for each word on average, as the codes contain 0.05-0.1% nonzero values, which are almost negligible.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sparse Representations of Words", "sec_num": "3.1" }, { "text": "One main source of LM parameters is the Embedding subnet, which takes a list of words (history/context) as input, and outputs dense, lowdimensional vector representations of the words. We leverage the sparse representation of words mentioned above to construct a compressed Embedding subnet, where the number of parameters is independent of the vocabulary size.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter Compression for the Embedding Subnet", "sec_num": "3.2" }, { "text": "By solving the optimization problem (5) for each word, we obtain a non-negative sparse code x \u2208 R B for each word, indicating the degree to which the word is related to common words in B. Then the embedding of a word is given b\u0177 w = U x.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter Compression for the Embedding Subnet", "sec_num": "3.2" }, { "text": "We would like to point out that the embedding of a word\u0175 is not sparse because U is a dense matrix, which serves as a shared parameter of learning all words' vector representations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter Compression for the Embedding Subnet", "sec_num": "3.2" }, { "text": "Another main source of parameters is the Prediction subnet. 
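Before turning to it, the sparse-coding step of problem (5) and the reconstruction of an embedding as Ux can be made concrete with a minimal sketch (ours, in NumPy). It uses plain projected gradient steps in place of the Adam optimizer the paper actually uses (Section 4.2), and the step size and iteration count are illustrative assumptions:

import numpy as np

# Learn the sparse code x of one rare word, problem (5) with the adaptive
# weights of Equations (6)-(7); negatives are clipped to zero after each update.
#   U : (E, B) pretrained embeddings of the B common (base) words
#   w : (E,)   pretrained embedding of the rare word to be coded
def learn_sparse_code(U, w, w_alpha=1.0, w_beta=0.1, lr=1e-3, steps=2000):
    E, B = U.shape
    x = np.full(B, 1.0 / B)                         # start from a uniform, sum-to-one code
    for _ in range(steps):
        r = U @ x - w                               # residual of the fitting term
        L = r @ r                                   # L(x)  = ||U x - w||_2^2
        R1 = np.abs(x).sum() + 1e-12                # R1(x) = ||x||_1
        R2 = abs(x.sum() - 1.0) + 1e-12             # R2(x) = |1'x - 1|
        alpha_t = w_alpha * L / R1                  # Eq. (6): alpha_t R1 / L kept at w_alpha
        beta_t = w_beta * L / R2                    # Eq. (7): beta_t  R2 / L kept at w_beta
        grad = (2.0 * (U.T @ r)                     # gradient of L(x)
                + alpha_t * np.sign(x)              # subgradient of R1(x)
                + beta_t * np.sign(x.sum() - 1.0))  # subgradient of R2(x)
        x = np.maximum(x - lr * grad, 0.0)          # gradient step, then clip negatives
    x[x < 0.015 * x.max()] = 0.0                    # drop near-zero coefficients (Section 4.2)
    return x

# Compressed Embedding subnet: the dense embedding of the rare word is U @ x,
# so only U and the few nonzero entries of x need to be stored.
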
As Table 1 shows, the output layer contains V target-word weight vectors and biases; the number increases with the vocabulary size. To compress this part of a neural LM, we propose a weight-sharing method that uses words' sparse representations again. Similar to the compression of word embeddings, we define a base set of weight vectors, and use them to represent the rest weights by sparse linear combinations. Without loss of generality, we let D = W :,1:B be the output weights of B base target words, and c = b 1:B be bias of the B target words. 3 The goal is to use D and c to represent W and b. However, as the values of W and b are unknown before the training of LM, we cannot obtain their sparse codes in advance.", "cite_spans": [ { "start": 611, "end": 612, "text": "3", "ref_id": null } ], "ref_spans": [ { "start": 63, "end": 70, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Parameter Compression for the Prediction Subnet", "sec_num": "3.3" }, { "text": "We claim that it is reasonable to share the same set of sparse codes to represent word vectors in Embedding and the output weights in the Prediction subnet. In a given corpus, an occurrence of a word is always companied by its context. The co-occurrence statistics about a word or corresponding context are the same. As both word embedding and context vectors capture these co-occurrence statistics (Levy and Goldberg, 2014), we can expect that context vectors share the same internal structure as embeddings. Moreover, for a fine-trained network, given any word w and its context h, the output layer's weight vector corresponding to w should specify a large inner-product score for the context h; thus these context vectors should approximate the weight vector of w. Therefore, word embeddings and the output weight vectors should share the same internal structures and it is plausible to use a same set of sparse representations for both words and target-word weight vectors. As we shall show in Section 4, our treatment of compressing the Prediction subnet does make sense and achieves high performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter Compression for the Prediction Subnet", "sec_num": "3.3" }, { "text": "Formally, the i-th output weight vector is estimated by\u0174", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter Compression for the Prediction Subnet", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "i = Dx i ,", "eq_num": "(8)" } ], "section": "Parameter Compression for the Prediction Subnet", "sec_num": "3.3" }, { "text": "3 W:,1:B is the first B columns of W . We apply NCE to estimate the parameters of the Prediction sub-network (dashed round rectangle). The SpUnnrmProb layer outputs a sparse, unnormalized probability of the next word. 
By \"sparsity,\" we mean that, in NCE, the probability is computed for only the \"true\" next word (red) and a few generated negative samples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter Compression for the Prediction Subnet", "sec_num": "3.3" }, { "text": "The biases can also be compressed a\u015d", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter Compression for the Prediction Subnet", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "b i = cx i .", "eq_num": "(9)" } ], "section": "Parameter Compression for the Prediction Subnet", "sec_num": "3.3" }, { "text": "where x i is the sparse representation of the i-th word. (It is shared in the compression of weights and biases.) In the above model, we have managed to compressed a language model whose number of parameters is irrelevant to the vocabulary size.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter Compression for the Prediction Subnet", "sec_num": "3.3" }, { "text": "To better estimate a \"prior\" distribution of words, we may alternatively assign an independent bias to each word, i.e., b is not compressed. In this variant, the number of model parameters grows very slowly and is also negligible because each word needs only one extra parameter. Experimental results show that by not compressing the bias vector, we can even improve the performance while compressing LMs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter Compression for the Prediction Subnet", "sec_num": "3.3" }, { "text": "We adopt the noise-contrastive estimation (NCE) method to train our model. Compared with the maximum likelihood estimation of softmax, NCE reduces computational complexity to a large degree. We further propose the ZRegression mechanism to stablize training. NCE generates a few negative samples for each positive data sample. During training, we only need to compute the unnormalized probability of these positive and negative samples. Interested readers are referred to (Gutmann and Hyv\u00e4rinen, 2012) for more information.", "cite_spans": [ { "start": 471, "end": 500, "text": "(Gutmann and Hyv\u00e4rinen, 2012)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Noise-Contrastive Estimation with ZRegression", "sec_num": "3.4" }, { "text": "Formally, the estimated probability of the word w i with history/context h is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Noise-Contrastive Estimation with ZRegression", "sec_num": "3.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P (w|h; \u03b8) = 1 Z h P 0 (w i |h; \u03b8) = 1 Z h exp(s(w i , h; \u03b8)),", "eq_num": "(10)" } ], "section": "Noise-Contrastive Estimation with ZRegression", "sec_num": "3.4" }, { "text": "where \u03b8 is the parameters and Z h is a contextdependent normalization factor. P 0 (w i |h; \u03b8) is the unnormalized probability of the w (given by the SpUnnrmProb layer in Figure 2 ). The NCE algorithm suggests to take Z h as parameters to optimize along with \u03b8, but it is intractable for context with variable lengths or large sizes in language modeling. 
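A minimal sketch (ours; the names are assumptions) of how the compressed Prediction subnet evaluates this unnormalized probability for one candidate word, combining Equations (8)-(10); under NCE it is evaluated only for the true next word and the k noise samples:

import numpy as np

# Unnormalized probability P_0(w_i | h) with compressed output parameters.
#   h   : (C,)   context vector from the Encoding subnet
#   D   : (C, B) output weights of the B base words, i.e. W[:, :B]
#   c   : (B,)   output biases of the B base words, i.e. b[:B]
#   x_i : (B,)   pre-computed sparse code of w_i (shared with the Embedding subnet)
def unnormalized_prob(h, D, c, x_i):
    W_i = D @ x_i            # Eq. (8): reconstructed output weight vector of w_i
    b_i = c @ x_i            # Eq. (9): reconstructed bias (or keep an independent per-word bias)
    s = W_i @ h + b_i        # score s(h, w_i), as in Eq. (2)
    return np.exp(s)         # Eq. (10) without the 1/Z_h factor: P_0(w_i | h)
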
Following Mnih and Teh (2012) , we set Z h = 1 for all h in the base model (without ZRegression).", "cite_spans": [ { "start": 364, "end": 383, "text": "Mnih and Teh (2012)", "ref_id": "BIBREF22" } ], "ref_spans": [ { "start": 170, "end": 178, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Noise-Contrastive Estimation with ZRegression", "sec_num": "3.4" }, { "text": "The objective for each occurrence of context/history h is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Noise-Contrastive Estimation with ZRegression", "sec_num": "3.4" }, { "text": "J(\u03b8|h) = log P (w i |h; \u03b8) P (w i |h; \u03b8) + kP n (w i ) + k j=1 log kP n (w j ) P (w j |h; \u03b8) + kP n (w j )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Noise-Contrastive Estimation with ZRegression", "sec_num": "3.4" }, { "text": ",", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Noise-Contrastive Estimation with ZRegression", "sec_num": "3.4" }, { "text": "where P n (w) is the probability of drawing a negative sample w; k is the number of negative samples that we draw for each positive sample. The overall objective of NCE is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Noise-Contrastive Estimation with ZRegression", "sec_num": "3.4" }, { "text": "J(\u03b8) = E h [J(\u03b8|h)] \u2248 1 M M i=1 J(\u03b8|h i ),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Noise-Contrastive Estimation with ZRegression", "sec_num": "3.4" }, { "text": "where h i is an occurrence of the context and M is the total number of context occurrences. Although setting Z h to 1 generally works well in our experiment, we find that in certain scenarios, the model is unstable. Experiments show that when the true normalization factor is far away from 1, the cost function may vibrate. To comply with NCE in general, we therefore propose a ZRegression layer to predict the normalization constant Z h dependent on h, instead of treating it as a constant.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Noise-Contrastive Estimation with ZRegression", "sec_num": "3.4" }, { "text": "The regression layer is computed by where W Z \u2208 R C and b Z \u2208 R are weights and bias for ZRegression. Hence, the estimated probability by NCE with ZRegression is given by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Noise-Contrastive Estimation with ZRegression", "sec_num": "3.4" }, { "text": "Z \u22121 h = exp(W Z h + b Z ),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Noise-Contrastive Estimation with ZRegression", "sec_num": "3.4" }, { "text": "P (w|h) = exp(s(h, w)) \u2022 exp(W Z h + b Z ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Noise-Contrastive Estimation with ZRegression", "sec_num": "3.4" }, { "text": "Note that the ZRegression layer does not guarantee normalized probabilities. During validation and testing, we explicitly normalize the probabilities by Equation (1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Noise-Contrastive Estimation with ZRegression", "sec_num": "3.4" }, { "text": "In this part, we first describe our dataset in Subsection 4.1. We evaluate our learned sparse codes of rare words in Subsection 4.2 and the compressed language model in Subsection 4.3. 
Subsection 4.4 provides in-depth analysis of the ZRegression mechanism.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4" }, { "text": "We used the freely available Wikipedia 4 dump (2014) as our dataset. We extracted plain sentences from the dump and removed all markups. We further performed several steps of preprocessing such as text normalization, sentence splitting, and tokenization. Sentences were randomly shuffled, so that no information across sentences could be used, i.e., we did not consider cached language models. The resulting corpus contains about 1.6 billion running words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "4.1" }, { "text": "The corpus was split into three parts for training, validation, and testing. As it is typically timeconsuming to train neural networks, we sampled a subset of 100 million running words to train neural LMs, but the full training set was used to train the backoff n-gram models. We chose hyperparameters by the validation set and reported model performance on the test set. Table 2 presents some statistics of our dataset.", "cite_spans": [], "ref_spans": [ { "start": 372, "end": 379, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Dataset", "sec_num": "4.1" }, { "text": "To obtain words' sparse codes, we chose 8k common words as the \"dictionary,\" i.e., B = 8000. Figure 3 : The sparse representations of selected words. The x-axis is the dictionary of 8k common words; the y-axis is the coefficient of sparse coding. Note that algorithm, secret, and debate are common words, each being coded by itself with a coefficient of 1.", "cite_spans": [], "ref_spans": [ { "start": 93, "end": 101, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Qualitative Analysis of Sparse Codes", "sec_num": "4.2" }, { "text": "We had 2k-42k uncommon words in different settings. We first pretrained word embeddings of both rare and common words, and obtained 200d vectors U and w in Equation 5. The dimension was specified in advance and not tuned. As there is no analytic solution to the objective, we optimized it by Adam (Kingma and Ba, 2014), which is a gradient-based method. To filter out small coefficients around zero, we simply set a value to 0 if it is less than 0.015 \u2022 max{v \u2208 x}. w \u03b1 in Equation (6) was set to 1 because we deemed fitting loss and sparsity penalty are equally important. We set w \u03b2 in Equation (7) to 0.1, and this hyperparameter is insensitive. Figure 3 plots the sparse codes of a few selected words. As we see, algorithm, secret, and debate are common words, and each is (sparsely) coded by itself with a coefficient of 1. We further notice that a rare word like algorithms has a sparse representation with only a few non-zero coefficient.", "cite_spans": [], "ref_spans": [ { "start": 649, "end": 657, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Qualitative Analysis of Sparse Codes", "sec_num": "4.2" }, { "text": "Moreover, the coefficient in the code of algorithms-corresponding to the base word algorithm-is large (\u223c 0.6), showing that the words algorithm and algorithms are similar. 
Such phenomena are also observed with secret and debate.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Qualitative Analysis of Sparse Codes", "sec_num": "4.2" }, { "text": "The qualitative analysis demonstrates that our approach can indeed learn a sparse code of a word, and that the codes are meaningful.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Qualitative Analysis of Sparse Codes", "sec_num": "4.2" }, { "text": "We then used the pre-computed sparse codes to compress neural LMs, which provides quantitative analysis of the learned sparse representations of words. We take perplexity as the performance measurement of a language model, which is de-fined by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Quantitative Analysis of Compressed Language Models", "sec_num": "4.3" }, { "text": "PPL = 2 \u2212 1 N N i=1 log 2 p(w i |h i )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Quantitative Analysis of Compressed Language Models", "sec_num": "4.3" }, { "text": "where N is the number of running words in the test corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Quantitative Analysis of Compressed Language Models", "sec_num": "4.3" }, { "text": "We leveraged LSTM-RNN as the Encoding subnet, which is a prevailing class of neural networks for language modeling (Sundermeyer et al., 2015; Karpathy et al., 2015) . The hidden layer was 200d.", "cite_spans": [ { "start": 115, "end": 141, "text": "(Sundermeyer et al., 2015;", "ref_id": "BIBREF29" }, { "start": 142, "end": 164, "text": "Karpathy et al., 2015)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Settings", "sec_num": "4.3.1" }, { "text": "We used the Adam algorithm to train our neural models. The learning rate was chosen by validation from {0.001, 0.002, 0.004, 0.006, 0.008}. Parameters were updated with a mini-batch size of 256 words. We trained neural LMs by NCE, where we generated 50 negative samples for each positive data sample in the corpus. All our model variants and baselines were trained with the same pre-defined hyperparameters or tuned over a same candidate set; thus our comparison is fair. We list our compressed LMs and competing methods as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Settings", "sec_num": "4.3.1" }, { "text": "\u2022 KN3. We adopted the modified Kneser-Ney smoothing technique to train a 3-gram LM; we used the SRILM toolkit (Stolcke and others, 2002) in out experiment. \u2022 LBL5. A Log-BiLinear model introduced in Mnih and Hinton (2007) . We used 5 preceding words as context. \u2022 LSTM-s. A standard LSTM-RNN language model which is applied in Sundermeyer et al. (2015) and Karpathy et al. (2015) . We implemented the LM ourselves based on Theano (Theano Development Team, 2016) and also used NCE for training. \u2022 LSTM-z. An LSTM-RNN enhanced with the ZRegression mechanism described in Section 3.4. \u2022 LSTM-z,wb. Based on LSTM-z, we compressed word embeddings in Embedding and the output weights and biases in Prediction. \u2022 LSTM-z,w. In this variant, we did not compress the bias term in the output layer. For each word in C, we assigned an independent bias parameter. Tables 3 shows the model as well as the backoff 3-gram LM, even if the 3-gram LM is trained on a much larger corpus with 1.6 billion words. The ZRegression mechanism improves the performance of LSTM to a large extent, which is unexpected. 
Subsection 4.4 will provide more in-depth analysis.", "cite_spans": [ { "start": 110, "end": 136, "text": "(Stolcke and others, 2002)", "ref_id": null }, { "start": 199, "end": 221, "text": "Mnih and Hinton (2007)", "ref_id": "BIBREF21" }, { "start": 327, "end": 352, "text": "Sundermeyer et al. (2015)", "ref_id": "BIBREF29" }, { "start": 357, "end": 379, "text": "Karpathy et al. (2015)", "ref_id": "BIBREF14" } ], "ref_spans": [ { "start": 851, "end": 869, "text": "Tables 3 shows the", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Settings", "sec_num": "4.3.1" }, { "text": "Regarding the compression method proposed in this paper, we notice that LSTM-z,wb and LSTM-z,w yield similar performance to LSTM-z. In particular, LSTM-z,w outperforms LSTM-z in all scenarios of different vocabulary sizes. Moreover, both LSTM-z,wb and LSTM-z,w can reduce the memory consumption by up to 80% (Table 4) .", "cite_spans": [], "ref_spans": [ { "start": 308, "end": 317, "text": "(Table 4)", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Performance", "sec_num": "4.3.2" }, { "text": "We further plot in Figure 4 the model performance (lines) and memory consumption (bars) in a fine-grained granularity of vocabulary sizes. We see such a tendency that compressed LMs (LSTMz,wb and LSTM-z,w, yellow and red lines) are generally better than LSTM-z (black line) when we have a small vocabulary. However, LSTMz,wb is slightly worse than LSTM-z if the vocabulary size is greater than, say, 20k. The LSTM-z,w remains comparable to LSTM-z as the vocabulary grows.", "cite_spans": [], "ref_spans": [ { "start": 19, "end": 27, "text": "Figure 4", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Performance", "sec_num": "4.3.2" }, { "text": "To explain this phenomenon, we may imagine that the compression using sparse codes has two effects: it loses information, but it also enables more accurate estimation of parameters especially for rare words. When the second factor dominates, we can reasonably expect a high performance of the compressed LM.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance", "sec_num": "4.3.2" }, { "text": "From the bars in Figure 4 , we observe that traditional LMs have a parameter space growing linearly with the vocabulary size. But the number of parameters in our compressed models does not increase-or strictly speaking, increases at an extremely small rate-with vocabulary.", "cite_spans": [], "ref_spans": [ { "start": 17, "end": 25, "text": "Figure 4", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Performance", "sec_num": "4.3.2" }, { "text": "These experiments show that our method can largely reduce the parameter space with even performance improvement. The results also verify that the sparse codes induced by our model indeed capture meaningful semantics and are potentially useful for other downstream tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance", "sec_num": "4.3.2" }, { "text": "We next analyze the effect of ZRegression for NCE training. 
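For reference, the quantity under analysis can be summarized in a short sketch (ours; function and variable names are assumptions): the NCE objective for one occurrence of h, with the ZRegression estimate of 1/Z_h plugged into the estimated probability. The experiments use k = 50 noise samples per positive sample.

import numpy as np

# NCE objective J(theta | h) with ZRegression, Section 3.4.
#   score(h, w)            : unnormalized score s(h, w), e.g. from the compressed Prediction subnet
#   P_n(w)                 : probability of w under the noise distribution
#   W_Z (C,), b_Z (scalar) : ZRegression parameters; exp(W_Z . h + b_Z) estimates 1 / Z_h
def nce_objective(h, pos_word, neg_words, score, P_n, W_Z, b_Z):
    k = len(neg_words)
    inv_Z = np.exp(W_Z @ h + b_Z)                  # predicted 1 / Z_h for this context
    P = lambda w: np.exp(score(h, w)) * inv_Z      # estimated P(w | h)
    J = np.log(P(pos_word) / (P(pos_word) + k * P_n(pos_word)))
    for w in neg_words:                            # k negative samples drawn from P_n
        J += np.log(k * P_n(w) / (P(w) + k * P_n(w)))
    return J                                       # maximized w.r.t. theta, W_Z and b_Z
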
As shown in Figure 5a , the training process becomes unstable after processing 70% of the dataset: the training loss vibrates significantly, whereas the test loss increases.", "cite_spans": [], "ref_spans": [ { "start": 72, "end": 81, "text": "Figure 5a", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "Effect of ZRegression", "sec_num": "4.4" }, { "text": "We find a strong correlation between unstableness and the Z h factor in Equation (10), i.e., the sum of unnormalized probability (Figure 5b ). Theoretical analysis shows that the Z h factor tends to be self-normalized even though it is not forced to (Gutmann and Hyv\u00e4rinen, 2012) . However, problems would occur, should it fail.", "cite_spans": [ { "start": 250, "end": 279, "text": "(Gutmann and Hyv\u00e4rinen, 2012)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 129, "end": 139, "text": "(Figure 5b", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "Effect of ZRegression", "sec_num": "4.4" }, { "text": "In traditional methods, NCE jointly estimates normalization factor Z and model parameters (Gutmann and Hyv\u00e4rinen, 2012) . For language modeling, Z h dependents on context h. Mnih and Teh (2012) propose to estimate a separate Z h based on two history words (analogous to 3-gram), but their approach hardly scales to RNNs because of the exponential number of different combinations of history words.", "cite_spans": [ { "start": 90, "end": 119, "text": "(Gutmann and Hyv\u00e4rinen, 2012)", "ref_id": "BIBREF8" }, { "start": 174, "end": 193, "text": "Mnih and Teh (2012)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Effect of ZRegression", "sec_num": "4.4" }, { "text": "We propose the ZRegression mechanism in Section 3.4, which can estimate the Z h factor well ( Figure 5d ) based on the history vector h. In this way, we manage to stabilize the training process ( Figure 5c ) and improve the performance by Table 3 .", "cite_spans": [], "ref_spans": [ { "start": 94, "end": 103, "text": "Figure 5d", "ref_id": "FIGREF5" }, { "start": 196, "end": 205, "text": "Figure 5c", "ref_id": "FIGREF5" }, { "start": 239, "end": 246, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Effect of ZRegression", "sec_num": "4.4" }, { "text": "It should be mentioned that ZRegression is not specific to model compression and is generally applicable to other neural LMs trained by NCE.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effect of ZRegression", "sec_num": "4.4" }, { "text": "In this paper, we proposed an approach to represent rare words by sparse linear combinations of common ones. Based on such combinations, we managed to compress an LSTM language model (LM), where memory does not increase with the vocabulary size except a bias and a sparse code for each word. 
Our experimental results also show that the compressed LM has yielded a better performance than the uncompressed base LM.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "https://code.google.com/archive/p/word2vec", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://en.wikipedia.org", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "When and why are log-linear models self-normalizing", "authors": [ { "first": "Jacob", "middle": [], "last": "Andreas", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Annual Meeting of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "244--249", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Andreas and Dan Klein. 2014. When and why are log-linear models self-normalizing. In Proceed- ings of the Annual Meeting of the North American Chapter of the Association for Computational Lin- guistics, pages 244-249.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Quick training of probabilistic neural nets by importance sampling", "authors": [ { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Jean-S\u00e9bastien", "middle": [], "last": "Sen\u00e9cal", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoshua Bengio and Jean-S\u00e9bastien Sen\u00e9cal. 2003. Quick training of probabilistic neural nets by im- portance sampling. In Proceedings of the Ninth In- ternational Workshop on Artificial Intelligence and Statistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A neural probabilistic language model", "authors": [ { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "R\u00e9jean", "middle": [], "last": "Ducharme", "suffix": "" }, { "first": "Pascal", "middle": [], "last": "Vincent", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Jauvin", "suffix": "" } ], "year": 2003, "venue": "The Journal of Machine Learning Research", "volume": "3", "issue": "", "pages": "1137--1155", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoshua Bengio, R\u00e9jean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic lan- guage model. The Journal of Machine Learning Re- search, 3:1137-1155.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Model compression", "authors": [ { "first": "Cristian", "middle": [], "last": "Bucilu\u01ce", "suffix": "" }, { "first": "Rich", "middle": [], "last": "Caruana", "suffix": "" }, { "first": "Alexandru", "middle": [], "last": "Niculescu-Mizil", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining", "volume": "", "issue": "", "pages": "535--541", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cristian Bucilu\u01ce, Rich Caruana, and Alexandru Niculescu-Mizil. 2006. Model compression. 
In Proceedings of the 12th ACM SIGKDD Interna- tional Conference on Knowledge Discovery and Data Mining, pages 535-541.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Strategies for training large vocabulary neural language models", "authors": [ { "first": "Welin", "middle": [], "last": "Chen", "suffix": "" }, { "first": "David", "middle": [], "last": "Grangier", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1512.04906" ] }, "num": null, "urls": [], "raw_text": "Welin Chen, David Grangier, and Michael Auli. 2015. Strategies for training large vocabulary neural lan- guage models. arXiv preprint arXiv:1512.04906.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Fast and robust neural network joint models for statistical machine translation", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Rabih", "middle": [], "last": "Zbib", "suffix": "" }, { "first": "Zhongqiang", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Lamar", "suffix": "" }, { "first": "M", "middle": [], "last": "Richard", "suffix": "" }, { "first": "John", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "", "middle": [], "last": "Makhoul", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52rd Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1370--1380", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Rabih Zbib, Zhongqiang Huang, Thomas Lamar, Richard M Schwartz, and John Makhoul. 2014. Fast and robust neural network joint models for statistical machine translation. In Proceedings of the 52rd Annual Meeting of the Association for Computational Linguistics, pages 1370-1380.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Sparse overcomplete word vector representations", "authors": [ { "first": "Manaal", "middle": [], "last": "Faruqui", "suffix": "" }, { "first": "Yulia", "middle": [], "last": "Tsvetkov", "suffix": "" }, { "first": "Dani", "middle": [], "last": "Yogatama", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1491--1500", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manaal Faruqui, Yulia Tsvetkov, Dani Yogatama, Chris Dyer, and Noah A. Smith. 2015. Sparse overcom- plete word vector representations. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics, pages 1491-1500.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Compressing deep convolutional networks using vector quantization", "authors": [ { "first": "Yunchao", "middle": [], "last": "Gong", "suffix": "" }, { "first": "Liu", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Lubomir", "middle": [], "last": "Bourdev", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1412.6115" ] }, "num": null, "urls": [], "raw_text": "Yunchao Gong, Liu Liu, Ming Yang, and Lubomir Bourdev. 2014. Compressing deep convolutional networks using vector quantization. 
arXiv preprint arXiv:1412.6115.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Noisecontrastive estimation of unnormalized statistical models, with applications to natural image statistics", "authors": [ { "first": "Michael", "middle": [], "last": "Gutmann", "suffix": "" }, { "first": "Aapo", "middle": [], "last": "Hyv\u00e4rinen", "suffix": "" } ], "year": 2012, "venue": "The Journal of Machine Learning Research", "volume": "13", "issue": "1", "pages": "307--361", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Gutmann and Aapo Hyv\u00e4rinen. 2012. Noise- contrastive estimation of unnormalized statistical models, with applications to natural image statis- tics. The Journal of Machine Learning Research, 13(1):307-361.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Document summarization based on data reconstruction", "authors": [ { "first": "Zhanying", "middle": [], "last": "He", "suffix": "" }, { "first": "Chun", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Jiajun", "middle": [], "last": "Bu", "suffix": "" }, { "first": "Can", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Lijun", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Deng", "middle": [], "last": "Cai", "suffix": "" }, { "first": "Xiaofei", "middle": [], "last": "He", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 26th AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "620--626", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhanying He, Chun Chen, Jiajun Bu, Can Wang, Lijun Zhang, Deng Cai, and Xiaofei He. 2012. Document summarization based on data reconstruction. In Pro- ceedings of the 26th AAAI Conference on Artificial Intelligence, pages 620-626.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Distilling the knowledge in a neural network", "authors": [ { "first": "Geoffrey", "middle": [], "last": "Hinton", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1503.02531" ] }, "num": null, "urls": [], "raw_text": "Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Speeding up convolutional neural networks with low rank expansions", "authors": [ { "first": "Max", "middle": [], "last": "Jaderberg", "suffix": "" }, { "first": "Andrea", "middle": [], "last": "Vedaldi", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Zisserman", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the British Machine Vision Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Max Jaderberg, Andrea Vedaldi, and Andrew Zisser- man. 2014. Speeding up convolutional neural net- works with low rank expansions. 
In Proceedings of the British Machine Vision Conference.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "On using very large target vocabulary for neural machine translation", "authors": [ { "first": "S\u00e9bastien", "middle": [], "last": "Jean", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Roland", "middle": [], "last": "Memisevic", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1412.2007" ] }, "num": null, "urls": [], "raw_text": "S\u00e9bastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2014. On using very large tar- get vocabulary for neural machine translation. arXiv preprint arXiv:1412.2007.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Speech and Language Processing", "authors": [ { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" }, { "first": "James", "middle": [ "H" ], "last": "Martin", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dan Jurafsky and James H. Martin. 2014. Speech and Language Processing. Pearson.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Visualizing and understanding recurrent networks", "authors": [ { "first": "Andrej", "middle": [], "last": "Karpathy", "suffix": "" }, { "first": "Justin", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Fei-Fei", "middle": [], "last": "Li", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1506.02078" ] }, "num": null, "urls": [], "raw_text": "Andrej Karpathy, Justin Johnson, and Fei-Fei Li. 2015. Visualizing and understanding recurrent networks. arXiv preprint arXiv:1506.02078.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1412.6980" ] }, "num": null, "urls": [], "raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Efficient sparse coding algorithms", "authors": [ { "first": "Honglak", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Alexis", "middle": [], "last": "Battle", "suffix": "" }, { "first": "Rajat", "middle": [], "last": "Raina", "suffix": "" }, { "first": "Andrew Y", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2006, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "801--808", "other_ids": {}, "num": null, "urls": [], "raw_text": "Honglak Lee, Alexis Battle, Rajat Raina, and An- drew Y Ng. 2006. Efficient sparse coding algo- rithms. 
In Advances in Neural Information Processing Systems, pages 801-808.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Linguistic regularities in sparse and explicit word representations", "authors": [ { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Eighteenth Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "171--180", "other_ids": {}, "num": null, "urls": [], "raw_text": "Omer Levy and Yoav Goldberg. 2014. Linguistic regularities in sparse and explicit word representations. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning, pages 171-180.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Recurrent neural network based language model", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Karafi\u00e1t", "suffix": "" }, { "first": "Lukas", "middle": [], "last": "Burget", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Cernock\u00fd", "suffix": "" }, { "first": "Sanjeev", "middle": [], "last": "Khudanpur", "suffix": "" } ], "year": 2010, "venue": "INTERSPEECH", "volume": "", "issue": "", "pages": "1045--1048", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Martin Karafi\u00e1t, Lukas Burget, Jan Cernock\u00fd, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In INTERSPEECH, pages 1045-1048.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Strategies for training large scale neural network language models", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Anoop", "middle": [], "last": "Deoras", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Povey", "suffix": "" }, { "first": "Lukas", "middle": [], "last": "Burget", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Cernock\u00fd", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the IEEE Workshop on Automatic Speech Recognition and Understanding", "volume": "", "issue": "", "pages": "196--201", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Anoop Deoras, Daniel Povey, Lukas Burget, and Jan Cernock\u00fd. 2011. Strategies for training large scale neural network language models. In Proceedings of the IEEE Workshop on Automatic Speech Recognition and Understanding, pages 196-201.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1301.3781" ] }, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Three new graphical models for statistical language modelling", "authors": [ { "first": "Andriy", "middle": [], "last": "Mnih", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Hinton", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 24th International Conference on Machine Learning", "volume": "", "issue": "", "pages": "641--648", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andriy Mnih and Geoffrey Hinton. 2007.
Three new graphical models for statistical language modelling. In Proceedings of the 24th International Conference on Machine learning, pages 641-648.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "A fast and simple algorithm for training neural probabilistic language models", "authors": [ { "first": "Andriy", "middle": [], "last": "Mnih", "suffix": "" }, { "first": "Yee-Whye", "middle": [], "last": "Teh", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1206.6426" ] }, "num": null, "urls": [], "raw_text": "Andriy Mnih and Yee-Whye Teh. 2012. A fast and simple algorithm for training neural probabilistic language models. arXiv preprint arXiv:1206.6426.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Hierarchical probabilistic neural network language model", "authors": [ { "first": "Fr\u00e9deric", "middle": [], "last": "Morin", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the International Workshop on Artificial Intelligence and Statistics", "volume": "", "issue": "", "pages": "246--252", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fr\u00e9deric Morin and Yoshua Bengio. 2005. Hierarchi- cal probabilistic neural network language model. In Proceedings of the International Workshop on Arti- ficial Intelligence and Statistics, pages 246-252.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Distilling word embeddings: An encoding approach", "authors": [ { "first": "Lili", "middle": [], "last": "Mou", "suffix": "" }, { "first": "Ge", "middle": [], "last": "Li", "suffix": "" }, { "first": "Yan", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Lu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Zhi", "middle": [], "last": "Jin", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1506.04488" ] }, "num": null, "urls": [], "raw_text": "Lili Mou, Ge Li, Yan Xu, Lu Zhang, and Zhi Jin. 2015a. Distilling word embeddings: An encoding approach. arXiv preprint arXiv:1506.04488.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Backward and forward language modeling for constrained natural language generation", "authors": [ { "first": "Lili", "middle": [], "last": "Mou", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Yan", "suffix": "" }, { "first": "Ge", "middle": [], "last": "Li", "suffix": "" }, { "first": "Lu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Zhi", "middle": [], "last": "Jin", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1512.06612" ] }, "num": null, "urls": [], "raw_text": "Lili Mou, Rui Yan, Ge Li, Lu Zhang, and Zhi Jin. 2015b. Backward and forward language modeling for constrained natural language generation. 
arXiv preprint arXiv:1512.06612.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "A neural attention model for abstractive sentence summarization", "authors": [ { "first": "Sumit", "middle": [], "last": "Alexander M Rush", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Chopra", "suffix": "" }, { "first": "", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "379--389", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexander M Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sen- tence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Lan- guage Processing, pages 379-389.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "A neural network approach to context-sensitive generation of conversational responses", "authors": [ { "first": "Alessandro", "middle": [], "last": "Sordoni", "suffix": "" }, { "first": "Michel", "middle": [], "last": "Galley", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Brockett", "suffix": "" }, { "first": "Yangfeng", "middle": [], "last": "Ji", "suffix": "" }, { "first": "Margaret", "middle": [], "last": "Mitchell", "suffix": "" }, { "first": "Jian-Yun", "middle": [], "last": "Nie", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Dolan", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "196--205", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. 2015. A neural network approach to context-sensitive gen- eration of conversational responses. In Proceed- ings of the 2015 Conference of the North Ameri- can Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 196-205.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "SRILM-An extensible language modeling toolkit", "authors": [ { "first": "Andreas", "middle": [], "last": "Stolcke", "suffix": "" } ], "year": 2002, "venue": "INTERSPEECH", "volume": "", "issue": "", "pages": "901--904", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andreas Stolcke et al. 2002. SRILM-An extensi- ble language modeling toolkit. In INTERSPEECH, pages 901-904.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "From feedforward to recurrent LSTM neural networks for language modeling", "authors": [ { "first": "Martin", "middle": [], "last": "Sundermeyer", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" }, { "first": "Ralf", "middle": [], "last": "Schl\u00fcter", "suffix": "" } ], "year": 2015, "venue": "IEEE/ACM Transactions on Audio, Speech and Language Processing", "volume": "23", "issue": "3", "pages": "517--529", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martin Sundermeyer, Hermann Ney, and Ralf Schl\u00fcter. 2015. From feedforward to recurrent LSTM neural networks for language modeling. 
IEEE/ACM Transactions on Audio, Speech and Language Processing, 23(3):517-529.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Theano: A Python framework for fast computation of mathematical expressions", "authors": [], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1605.02688" ] }, "num": null, "urls": [], "raw_text": "Theano Development Team. 2016. Theano: A Python framework for fast computation of mathematical expressions. arXiv preprint arXiv:1605.02688.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Robust sparse coding for face recognition", "authors": [ { "first": "Meng", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Yang", "suffix": "" }, { "first": "David", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition", "volume": "", "issue": "", "pages": "625--632", "other_ids": {}, "num": null, "urls": [], "raw_text": "Meng Yang, Lei Zhang, Jian Yang, and David Zhang. 2011. Robust sparse coding for face recognition. In Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition, pages 625-632.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "num": null, "type_str": "figure", "text": "The architecture of a neural network-based language model." }, "FIGREF1": { "uris": null, "num": null, "type_str": "figure", "text": "and Hinton et al. (2015) use a well-trained large network to guide the training of a small network for model compression. Jaderberg et al. (2014) compress neural models by matrix factorization, Gong et al. (2014) by quantization. In NLP, Mou et al. (2015a) learn an embedding subspace by supervised training. Our work bears little, if any, resemblance to the above methods, as we compress embeddings and output weights using sparse word representations." }, "FIGREF2": { "uris": null, "num": null, "type_str": "figure", "text": "Compressing the output of the neural LM." }, "FIGREF3": { "uris": null, "num": null, "type_str": "figure", "text": "Fine-grained plot of performance (perplexity) and memory consumption (including sparse codes) versus the vocabulary size." }, "FIGREF4": { "uris": null, "num": null, "type_str": "figure", "text": "(a) Training/test loss vs. training time w/o ZRegression. (b) The validation perplexity and normalization factor Z_h w/o ZRegression. (c) Training loss vs. training time w/ ZRegression of different runs. (d) The validation perplexity and normalization factor Z_h w/ ZRegression." }, "FIGREF5": { "uris": null, "num": null, "type_str": "figure", "text": "Analysis of ZRegression." }, "TABREF0": { "text": "Number of parameters in different neural network-based LMs. E: embedding dimension; C: context dimension; V: vocabulary size.", "content": "", "num": null, "type_str": "table", "html": null }, "TABREF2": { "text": "Statistics of our corpus.", "content": "
", "num": null, "type_str": "table", "html": null }, "TABREF4": { "text": "Perplexity of our compressed language models and baselines. \u2020 Trained with the full corpus of 1.6 billion running words.", "content": "
Vocabulary   10k     22k     36k     50k
LSTM-z,w     17.76   59.28   73.42   79.75
LSTM-z,wb    17.80   59.44   73.61   79.95
", "num": null, "type_str": "table", "html": null }, "TABREF5": { "text": "Memory reduction (%) by our proposed methods in comparison with the uncompressed model LSTM-z. The memory of the sparse codes is included.", "content": "", "num": null, "type_str": "table", "html": null } } } }