{ "paper_id": "P18-1027", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:42:06.810510Z" }, "title": "Sharp Nearby, Fuzzy Far Away: How Neural Language Models Use Context", "authors": [ { "first": "Urvashi", "middle": [], "last": "Khandelwal", "suffix": "", "affiliation": { "laboratory": "", "institution": "Stanford University", "location": {} }, "email": "urvashik@stanford.edu" }, { "first": "He", "middle": [], "last": "He", "suffix": "", "affiliation": { "laboratory": "", "institution": "Stanford University", "location": {} }, "email": "hehe@stanford.edu" }, { "first": "Peng", "middle": [], "last": "Qi", "suffix": "", "affiliation": { "laboratory": "", "institution": "Stanford University", "location": {} }, "email": "pengqi@stanford.edu" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "", "affiliation": { "laboratory": "", "institution": "Stanford University", "location": {} }, "email": "jurafsky@stanford.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We know very little about how neural language models (LM) use prior linguistic context. In this paper, we investigate the role of context in an LSTM LM, through ablation studies. Specifically, we analyze the increase in perplexity when prior context words are shuffled, replaced, or dropped. On two standard datasets, Penn Treebank and WikiText-2, we find that the model is capable of using about 200 tokens of context on average, but sharply distinguishes nearby context (recent 50 tokens) from the distant history. The model is highly sensitive to the order of words within the most recent sentence, but ignores word order in the long-range context (beyond 50 tokens), suggesting the distant past is modeled only as a rough semantic field or topic. We further find that the neural caching model (Grave et al., 2017b) especially helps the LSTM to copy words from within this distant context. Overall, our analysis not only provides a better understanding of how neural LMs use their context, but also sheds light on recent success from cache-based models.", "pdf_parse": { "paper_id": "P18-1027", "_pdf_hash": "", "abstract": [ { "text": "We know very little about how neural language models (LM) use prior linguistic context. In this paper, we investigate the role of context in an LSTM LM, through ablation studies. Specifically, we analyze the increase in perplexity when prior context words are shuffled, replaced, or dropped. On two standard datasets, Penn Treebank and WikiText-2, we find that the model is capable of using about 200 tokens of context on average, but sharply distinguishes nearby context (recent 50 tokens) from the distant history. The model is highly sensitive to the order of words within the most recent sentence, but ignores word order in the long-range context (beyond 50 tokens), suggesting the distant past is modeled only as a rough semantic field or topic. We further find that the neural caching model (Grave et al., 2017b) especially helps the LSTM to copy words from within this distant context. Overall, our analysis not only provides a better understanding of how neural LMs use their context, but also sheds light on recent success from cache-based models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Language models are an important component of natural language generation tasks, such as machine translation and summarization. 
They use context (a sequence of words) to estimate a probability distribution of the upcoming word. For several years now, neural language models (NLMs) (Graves, 2013; Jozefowicz et al., 2016; Grave et al., 2017a; Dauphin et al., 2017; Melis et al., 2018; Yang et al., 2018) have consistently outperformed classical n-gram models, an im-provement often attributed to their ability to model long-range dependencies in faraway context. Yet, how these NLMs use the context is largely unexplained.", "cite_spans": [ { "start": 281, "end": 295, "text": "(Graves, 2013;", "ref_id": null }, { "start": 296, "end": 320, "text": "Jozefowicz et al., 2016;", "ref_id": "BIBREF8" }, { "start": 321, "end": 341, "text": "Grave et al., 2017a;", "ref_id": "BIBREF7" }, { "start": 342, "end": 363, "text": "Dauphin et al., 2017;", "ref_id": "BIBREF4" }, { "start": 364, "end": 383, "text": "Melis et al., 2018;", "ref_id": "BIBREF14" }, { "start": 384, "end": 402, "text": "Yang et al., 2018)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recent studies have begun to shed light on the information encoded by Long Short-Term Memory (LSTM) networks. They can remember sentence lengths, word identity, and word order (Adi et al., 2017) , can capture some syntactic structures such as subject-verb agreement (Linzen et al., 2016) , and can model certain kinds of semantic compositionality such as negation and intensification (Li et al., 2016) .", "cite_spans": [ { "start": 176, "end": 194, "text": "(Adi et al., 2017)", "ref_id": "BIBREF0" }, { "start": 266, "end": 287, "text": "(Linzen et al., 2016)", "ref_id": "BIBREF11" }, { "start": 384, "end": 401, "text": "(Li et al., 2016)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "However, all of the previous work studies LSTMs at the sentence level, even though they can potentially encode longer context. Our goal is to complement the prior work to provide a richer understanding of the role of context, in particular, long-range context beyond a sentence. We aim to answer the following questions: (i) How much context is used by NLMs, in terms of the number of tokens? (ii) Within this range, are nearby and long-range contexts represented differently? (iii) How do copy mechanisms help the model use different regions of context?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We investigate these questions via ablation studies on a standard LSTM language model (Merity et al., 2018) on two benchmark language modeling datasets: Penn Treebank and WikiText-2. Given a pretrained language model, we perturb the prior context in various ways at test time, to study how much the perturbed information affects model performance. Specifically, we alter the context length to study how many tokens are used, permute tokens to see if LSTMs care about word order in both local and global contexts, and drop and replace target words to test the copying abilities of LSTMs with and without an external copy mechanism, such as the neural cache (Grave et al., 2017b) . 
The cache operates by first recording tar-get words and their context representations seen in the history, and then encouraging the model to copy a word from the past when the current context representation matches that word's recorded context vector.", "cite_spans": [ { "start": 86, "end": 107, "text": "(Merity et al., 2018)", "ref_id": "BIBREF15" }, { "start": 656, "end": 677, "text": "(Grave et al., 2017b)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We find that the LSTM is capable of using about 200 tokens of context on average, with no observable differences from changing the hyperparameter settings. Within this context range, word order is only relevant within the 20 most recent tokens or about a sentence. In the long-range context, order has almost no effect on performance, suggesting that the model maintains a high-level, rough semantic representation of faraway words. Finally, we find that LSTMs can regenerate some words seen in the nearby context, but heavily rely on the cache to help them copy words from the long-range context.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Language models assign probabilities to sequences of words. In practice, the probability can be factorized using the chain rule", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language Modeling", "sec_num": "2" }, { "text": "P (w 1 , . . . , w t ) = t Y i=1 P (w i |w i 1 , . . . , w 1 ),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language Modeling", "sec_num": "2" }, { "text": "and language models compute the conditional probability of a target word w t given its preceding context, w 1 , . . . , w t 1 . Language models are trained to minimize the negative log likelihood of the training corpus:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language Modeling", "sec_num": "2" }, { "text": "NLL = 1 T T X t=1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language Modeling", "sec_num": "2" }, { "text": "log P (w t |w t 1 , . . . , w 1 ), and the model's performance is usually evaluated by perplexity (PP) on a held-out set:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language Modeling", "sec_num": "2" }, { "text": "PP = exp(NLL).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language Modeling", "sec_num": "2" }, { "text": "When testing the effect of ablations, we focus on comparing differences in the language model's losses (NLL) on the dev set, which is equivalent to relative improvements in perplexity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language Modeling", "sec_num": "2" }, { "text": "Our goal is to investigate the effect of contextual features such as the length of context, word order and more, on LSTM performance. Thus, we use ablation analysis, during evaluation, to measure changes in model performance in the absence of certain contextual information. Typically, when testing the language model on a held-out sequence of words, all tokens prior to the target word are fed to the model; we call this the infinite-context setting. In this study, we observe the change in perplexity or NLL when the model is fed a perturbed context (w t 1 , . . . , w 1 ), at test time. 
refers to the perturbation function, and we experiment with perturbations such as dropping tokens, shuffling/reversing tokens, and replacing tokens with other words from the vocabulary. 1 It is important to note that we do not train the model with these perturbations. This is because the aim is to start with an LSTM that has been trained in the standard fashion, and discover how much context it uses and which features in nearby vs. long-range context are important. Hence, the mismatch in training and test is a necessary part of experiment design, and all measured losses are upper bounds which would likely be lower, were the model also trained to handle such perturbations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": "We use a standard LSTM language model, trained and finetuned using the Averaging SGD optimizer (Merity et al., 2018) . 2 We also augment the model with a cache only for Section 6.2, in order to investigate why an external copy mechanism is helpful. A short description of the architecture and a detailed list of hyperparameters is listed in Appendix A, and we refer the reader to the original paper for additional details.", "cite_spans": [ { "start": 95, "end": 116, "text": "(Merity et al., 2018)", "ref_id": "BIBREF15" }, { "start": 119, "end": 120, "text": "2", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": "We analyze two datasets commonly used for language modeling, Penn Treebank (PTB) (Marcus et al., 1993; Mikolov et al., 2010) and Wikitext-2 (Wiki) (Merity et al., 2017) . PTB consists of Wall Street Journal news articles with 0.9M tokens for training and a 10K vocabulary. Wiki is a larger and more diverse dataset, containing Wikipedia articles across many topics with 2.1M tokens for training and a 33K vocabulary. Additional dataset statistics are provided in Ta-ble 1. In this paper, we present results only on the dev sets, in order to avoid revealing details about the test sets. However, we have confirmed that all results are consistent with those on the test sets. In addition, for all experiments we report averaged results from three models trained with different random seeds. Some of the figures provided contain trends from only one of the two datasets and the corresponding figures for the other dataset are provided in Appendix B.", "cite_spans": [ { "start": 81, "end": 102, "text": "(Marcus et al., 1993;", "ref_id": "BIBREF13" }, { "start": 103, "end": 124, "text": "Mikolov et al., 2010)", "ref_id": "BIBREF17" }, { "start": 147, "end": 168, "text": "(Merity et al., 2017)", "ref_id": null } ], "ref_spans": [ { "start": 463, "end": 472, "text": "Ta-ble 1.", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": "LSTMs are designed to capture long-range dependencies in sequences (Hochreiter and Schmidhuber, 1997) . In practice, LSTM language models are provided an infinite amount of prior context, which is as long as the test sequence goes. However, it is unclear how much of this history has a direct impact on model performance. In this section, we investigate how many tokens of context achieve a similar loss (or 1-2% difference in model perplexity) to providing the model infinite context. 
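To make the truncation ablation concrete, the following is a minimal sketch (not the authors' code), assuming a pretrained word-level language model exposed through a hypothetical `model.score(context, target)` method that returns the negative log-likelihood of `target` given `context`:

```python
import math

def nll_with_truncated_context(model, tokens, n):
    """Average dev NLL when only the n most recent tokens are fed as context."""
    losses = []
    # Positions with fewer than n context tokens are skipped, as in the paper.
    for t in range(n, len(tokens)):
        context = tokens[t - n:t]              # w_{t-n}, ..., w_{t-1}
        losses.append(model.score(context, tokens[t]))
    return sum(losses) / len(losses)

def relative_perplexity_increase(nll_truncated, nll_infinite):
    """Relative perplexity increase of truncated vs. infinite context."""
    return math.exp(nll_truncated - nll_infinite) - 1.0   # 0.01 == 1% increase
```

Sweeping n (e.g., from 5 to 1,000 tokens) and finding the smallest n whose loss stays within a 1-2% perplexity increase of the infinite-context loss yields the quantity analyzed next.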
We consider this the effective context size.", "cite_spans": [ { "start": 67, "end": 101, "text": "(Hochreiter and Schmidhuber, 1997)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "How much context is used?", "sec_num": "4" }, { "text": "LSTM language models have an effective context size of about 200 tokens on average. We determine the effective context size by varying the number of tokens fed to the model. In particular, at test time, we feed the model the most recent n tokens:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "How much context is used?", "sec_num": "4" }, { "text": "truncate (w t 1 , . . . , w 1 ) = (w t 1 , . . . , w t n ),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "How much context is used?", "sec_num": "4" }, { "text": "(1) where n > 0 and all tokens farther away from the target w t are dropped. 3 We compare the dev loss (NLL) from truncated context, to that of the infinite-context setting where all previous words are fed to the model. The resulting increase in loss indicates how important the dropped tokens are for the model. Figure 1a shows that the difference in dev loss, between truncated-and infinite-context variants of the test setting, gradually diminishes as we increase n from 5 tokens to 1000 tokens. In particular, we only see a 1% increase in perplexity as we move beyond a context of 150 tokens on PTB and 250 tokens on Wiki. Hence, we provide empirical evidence to show that LSTM language models do, in fact, model long-range dependencies, without help from extra context vectors or caches.", "cite_spans": [ { "start": 77, "end": 78, "text": "3", "ref_id": null } ], "ref_spans": [ { "start": 313, "end": 322, "text": "Figure 1a", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "How much context is used?", "sec_num": "4" }, { "text": "Changing hyperparameters does not change the effective context size. NLM performance has been shown to be sensitive to hyperparameters such as the dropout rate and model size (Melis et al., 2018) . To investigate if these hyperparameters affect the effective context size as well, we train separate models by varying the following hyperparameters one at a time: (1) number of timesteps for truncated back-propogation (2) dropout rate, (3) model size (hidden state size, number of layers, and word embedding size). In Figure 1b , we show that while different hyperparameter settings result in different perplexities in the infinite-context setting, the trend of how perplexity changes as we reduce the context size remains the same.", "cite_spans": [ { "start": 175, "end": 195, "text": "(Melis et al., 2018)", "ref_id": "BIBREF14" } ], "ref_spans": [ { "start": 517, "end": 526, "text": "Figure 1b", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "How much context is used?", "sec_num": "4" }, { "text": "The effective context size determined in the previous section is aggregated over the entire corpus, which ignores the type of the upcoming word. Boyd-Graber and Blei (2009) have previously investigated the differences in context used by different types of words and found that function words rely on less context than content words. We investigate whether the effective context size varies across different types of words, by categorizing them based on either frequency or parts-ofspeech. 
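A minimal sketch of the per-class loss aggregation used for this comparison, assuming per-token losses have already been computed as in the truncation experiment above (the 800-occurrence frequency threshold is the one used below; the helper names are hypothetical):

```python
from collections import defaultdict

FREQ_THRESHOLD = 800  # training-set count at or above which a word is "frequent"

def average_loss_by_frequency(token_losses, train_counts):
    """token_losses: list of (word, nll) pairs from one evaluation run.
    train_counts: dict mapping each word to its training-set frequency."""
    totals, counts = defaultdict(float), defaultdict(int)
    for word, nll in token_losses:
        cls = "frequent" if train_counts.get(word, 0) >= FREQ_THRESHOLD else "infrequent"
        totals[cls] += nll
        counts[cls] += 1
    return {cls: totals[cls] / counts[cls] for cls in totals}
```

The same aggregation applies to part-of-speech classes by keying on a word's POS tag instead of its frequency bucket.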
Specifically, we vary the number of context tokens in the same way as the previous section, and aggregate loss over words within each class separately.", "cite_spans": [ { "start": 145, "end": 172, "text": "Boyd-Graber and Blei (2009)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Do different types of words need different amounts of context?", "sec_num": "4.1" }, { "text": "Infrequent words need more context than frequent words. We categorize words that appear at least 800 times in the training set as frequent, and the rest as infrequent. Figure 1c shows that the loss of frequent words is insensitive to missing context beyond the 50 most recent tokens, which holds across the two datasets. Infrequent words, on the other hand, require more than 200 tokens.", "cite_spans": [], "ref_spans": [ { "start": 168, "end": 177, "text": "Figure 1c", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Do different types of words need different amounts of context?", "sec_num": "4.1" }, { "text": "Content words need more context than function words. Given the parts-of-speech of each word, we define content words as nouns, verbs and adjectives, and function words as prepositions and determiners. 4 Figure 1d shows that the loss of nouns and verbs is affected by distant context, whereas when the target word is a determiner, the model only relies on words within the last 10 tokens. Discussion. Overall, we find that the model's effective context size is dynamic. It depends on the target word, which is consistent with what we know about language, e.g., determiners require less context than nouns (Boyd-Graber and Blei, 2009) . In addition, these findings are consistent with those previously reported for different language models and datasets (Hill et al., 2016; Wang and Cho, 2016).", "cite_spans": [ { "start": 201, "end": 202, "text": "4", "ref_id": null }, { "start": 604, "end": 632, "text": "(Boyd-Graber and Blei, 2009)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 203, "end": 212, "text": "Figure 1d", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Do different types of words need different amounts of context?", "sec_num": "4.1" }, { "text": "An effective context size of 200 tokens allows for representing linguistic information at many levels of abstraction, such as words, sentences, topics, etc. In this section, we investigate the importance of contextual information such as word order and word identity. Unlike prior work that studies LSTM embeddings at the sentence level, we look at both nearby and faraway context, and analyze how the language model treats contextual information presented in different regions of the context.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Nearby vs. long-range context", "sec_num": "5" }, { "text": "Adi et al. 2017have shown that LSTMs are aware of word order within a sentence. We investigate whether LSTM language models are sensitive to word order within a larger context window. To determine the range in which word order affects model performance, we permute substrings in the context to observe their effect on dev loss compared to the unperturbed baseline. In particular, we perturb the context as follows,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Does word order matter?", "sec_num": "5.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "permute (w t 1 , . . . 
, w t n ) = (w t 1 , .., \u21e2(w t s 1 1 , .., w t s 2 ), .., w t n )", "eq_num": "(2)" } ], "section": "Does word order matter?", "sec_num": "5.1" }, { "text": "where \u21e2 2 {shu\u270fe, reverse} and (s 1 , s 2 ] denotes the range of the substring to be permuted. We refer to this substring as the permutable span. For the following analysis, we distinguish local word order, within 20-token permutable spans which are the length of an average sentence, from global word order, which extends beyond local spans to include all the farthest tokens in the history. We consider selecting permutable spans within a context of n = 300 tokens, which is greater than the effective context size.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Does word order matter?", "sec_num": "5.1" }, { "text": "Local word order only matters for the most recent 20 tokens. We can locate the region of context beyond which the local word order has no relevance, by permuting word order locally at various points within the context. We accomplish this by varying s 1 and setting s 2 = s 1 + 20. Figure 2a shows that local word order matters very much within the most recent 20 tokens, and far less beyond that.", "cite_spans": [], "ref_spans": [ { "start": 281, "end": 290, "text": "Figure 2a", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Does word order matter?", "sec_num": "5.1" }, { "text": "Global order of words only matters for the most recent 50 tokens. Similar to the local word order experiment, we locate the point beyond which the general location of words within the context is irrelevant, by permuting global word order. We achieve this by varying s 1 and fixing s 2 = n. Figure 2b demonstrates that after 50 tokens, shuffling or reversing the remaining words in the context has no effect on the model performance.", "cite_spans": [], "ref_spans": [ { "start": 290, "end": 296, "text": "Figure", "ref_id": null } ], "eq_spans": [], "section": "Does word order matter?", "sec_num": "5.1" }, { "text": "In order to determine whether this is due to insensitivity to word order or whether the language model is simply not sensitive to any changes in the long-range context, we further replace words in the permutable span with a randomly sampled sequence of the same length from the training set. The gap between the permutation and replacement curves in Figure 2b illustrates that the identity of words in the far away context is still relevant, and only the order of the words is not.", "cite_spans": [], "ref_spans": [ { "start": 350, "end": 359, "text": "Figure 2b", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Does word order matter?", "sec_num": "5.1" }, { "text": "Discussion. These results suggest that word order matters only within the most recent sentence, beyond which the order of sentences matters for 2-3 sentences (determined by our experiments on global word order). After 50 tokens, word order has almost no effect, but the identity of those words is still relevant, suggesting a high-level, rough semantic representation for these faraway words. In light of these observations, we define 50 tokens as the boundary between nearby and longrange context, for the rest of this study. 
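For reference, the span permutation of Equation 2 used throughout this section can be sketched as follows (an illustrative implementation, not the authors' code; the context list is assumed to run from the most recent token w_{t-1} to the most distant token w_{t-n}):

```python
import random

def permute_span(context, s1, s2, mode="shuffle"):
    """Perturb the permutable span (s1, s2] of the context (Equation 2).

    context[0] is the most recent token; tokens at offsets s1+1..s2 from the
    target are shuffled or reversed, and all other tokens are left untouched.
    """
    prefix, span, suffix = context[:s1], context[s1:s2], context[s2:]
    if mode == "shuffle":
        random.shuffle(span)
    elif mode == "reverse":
        span = span[::-1]
    return prefix + span + suffix

# Local word order:  permute_span(ctx, s1, s1 + 20)   # 20-token permutable span
# Global word order: permute_span(ctx, s1, len(ctx))  # span extends to w_{t-n}
```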
Next, we investigate the importance of different word types in the different regions of context.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Does word order matter?", "sec_num": "5.1" }, { "text": "Open-class or content words such as nouns, verbs, adjectives and adverbs, contribute more to the semantic context of natural language than function words such as determiners and prepositions. Given our observation that the language model represents long-range context as a rough semantic representation, a natural question to ask is how important are function words in the long-range context? Below, we study the effect of these two classes of words on the model's performance. Function words are defined as all words that are not nouns, verbs, adjectives or adverbs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Types of words and the region of context", "sec_num": "5.2" }, { "text": "Content words matter more than function words. To study the effect of content and function words on model perplexity, we drop them from different regions of the context and compare the resulting change in loss. Specifically, we perturb the context as follows,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Types of words and the region of context", "sec_num": "5.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "drop (w t 1 , . . . , w t n ) = (w t 1 , .., w t s 1 , f pos (y, (w t s 1 1 , .., w t n )))", "eq_num": "(3)" } ], "section": "Types of words and the region of context", "sec_num": "5.2" }, { "text": "where f pos (y, span) is a function that drops all words with POS tag y in a given span. s 1 denotes the starting offset of the perturbed subsequence. For these experiments, we set s 1 2 {5, 20, 100}. On average, there are slightly more content words than function words in any given text. As shown in Section 4, dropping more words results in higher loss. To eliminate the effect of dropping different fractions of words, for each experiment where we drop a specific word type, we add a control experiment where the same number of tokens are sampled randomly from the context, and dropped. Figure 3 shows that dropping content words as close as 5 tokens from the target word increases model perplexity by about 65%, whereas dropping the same proportion of tokens at random, results in a much smaller 17% increase. Dropping all function words, on the other hand, is not very different from dropping the same proportion of words at random, but still increases loss by about 15%. This suggests that within the most recent sentence, content words are extremely important but function words are also relevant since they help maintain grammaticality and syntactic structure. On the other hand, beyond a sentence, only content words have a sizeable influence on model performance.", "cite_spans": [], "ref_spans": [ { "start": 591, "end": 599, "text": "Figure 3", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Types of words and the region of context", "sec_num": "5.2" }, { "text": "As shown in Section 5.1, LSTM language models use a high-level, rough semantic representation for long-range context, suggesting that they might not be using information from any specific words located far away. Adi et al. (2017) have also shown that while LSTMs are aware of which words appear in their context, this awareness degrades with increasing length of the sequence. 
However, the success of copy mechanisms such as attention and caching (Bahdanau et al., 2015; Hill et al., 2016; Merity et al., 2017; Grave et al., 2017a,b) suggests that information in the distant context is very useful. Given this fact, can LSTMs copy any words from context without relying on external copy mechanisms? Do they copy words from nearby and long-range context equally? How does the caching model help? In this section, we investigate these questions by studying how LSTMs copy words from different regions of context. More specifically, we look at two regions of context, nearby (within 50 most recent tokens) and longrange (beyond 50 tokens), and study three categories of target words: those that can be copied from nearby context (C near ), those that can only be copied from long-range context (C far ), and those that cannot be copied at all given a limited context (C none ).", "cite_spans": [ { "start": 212, "end": 229, "text": "Adi et al. (2017)", "ref_id": "BIBREF0" }, { "start": 447, "end": 470, "text": "(Bahdanau et al., 2015;", "ref_id": "BIBREF1" }, { "start": 471, "end": 489, "text": "Hill et al., 2016;", "ref_id": null }, { "start": 490, "end": 510, "text": "Merity et al., 2017;", "ref_id": null }, { "start": 511, "end": 533, "text": "Grave et al., 2017a,b)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "To cache or not to cache?", "sec_num": "6" }, { "text": "Even without a cache, LSTMs often regenerate words that have already appeared in prior context. We investigate how much the model relies on the previous occurrences of the upcoming target word, by analyzing the change in loss after dropping and replacing this target word in the context.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Can LSTMs copy words without caches?", "sec_num": "6.1" }, { "text": "LSTMs can regenerate words seen in nearby context. In order to demonstrate the usefulness Words that can only be copied from long-range context are more sensitive to dropping all the distant words than to dropping the target. For words that can be copied from nearby context, dropping only the target has a much larger effect on loss compared to dropping the long-range context. (b) Replacing the target word with other tokens from vocabulary hurts more than dropping it from the context, for words that can be copied from nearby context, but has no effect on words that can only be copied from far away.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Can LSTMs copy words without caches?", "sec_num": "6.1" }, { "text": "of target word occurrences in context, we experiment with dropping all the distant context versus dropping only occurrences of the target word from the context. In particular, we compare removing all tokens after the 50 most recent tokens, (Equation 1 with n = 50), versus removing only the target word, in context of size n = 300:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Can LSTMs copy words without caches?", "sec_num": "6.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "drop (w t 1 , . . . , w t n ) = f word (w t , (w t 1 , . . . , w t n )),", "eq_num": "(4)" } ], "section": "Can LSTMs copy words without caches?", "sec_num": "6.1" }, { "text": "where f word (w, span) drops words equal to w in a given span. We compare applying both perturbations to a baseline model with unperturbed context restricted to n = 300. 
We also include the target words that never appear in the context (C none ) as a control set for this experiment. The results show that LSTMs rely on the rough semantic representation of the faraway context to generate C far , but direclty copy C near from the nearby context. In Figure 4a , the long-range context bars show that for words that can only be copied from long-range context (C far ), removing all distant context is far more disruptive than removing only occurrences of the target word (12% and 2% increase in perplexity, respectively). This suggests that the model relies more on the rough semantic representation of faraway context to predict these C far tokens, rather than directly copying them from the distant context. On the other hand, for words that can be copied from nearby context (C near ), removing all long-range context has a smaller effect (about 3.5% increase in perplexity) as seen in Figure 4a , compared to removing the target word which increases perplexity by almost 9%. This suggests that these C near tokens are more often copied from nearby context, than inferred from information found in the rough semantic representation of long-range context. However, is it possible that dropping the target tokens altogether, hurts the model too much by adversely affecting grammaticality of the context? We test this theory by replacing target words in the context with other words from the vocabulary. This perturbation is similar to Equation 4, except instead of dropping the token, we replace it with a different one. In particular, we experiment with replacing the target with , to see if having the generic word is better than not having any word. We also replace it with a word that has the same part-of-speech tag and a similar frequency in the dataset, to observe how much this change confuses the model. Figure 4b shows that replacing the target with other words results in up to a 14% increase in perplexity for C near , which suggests that the replacement token seems to confuse the model far more than when the token is simply dropped. However, the words that rely on the long-range context, C far , are largely unaffected by these changes, which confirms our conclusion from dropping the target tokens: C far witnesses in the morris film served up as a solo however the music lacks the UNK provided by a context within another medium UNK of mr. glass may agree with the critic richard UNK 's sense that the NUM music in twelve parts is as UNK and UNK as the UNK UNK but while making the obvious point that both UNK develop variations from themes this comparison UNK the intensely UNK nature of mr. glass snack-food UNK increased a strong NUM NUM in the third quarter while domestic profit increased in double UNK mr. 
Figure 5: Success of neural cache on PTB. Brightly shaded region shows peaky distribution.
unit among other things the restructured facilities will substantially reduce the group 's required amortization of the term loan portion of the credit facilities through september NUM mlx said certain details of the restructured facilities remain to be negotiated the agreement is subject to completion of a definitive amendment and appropriate approvals william p. UNK mlx chairman and chief executive said the pact will provide mlx with the additional time and flexibility necessary to complete the restructuring of the company 's capital structure mlx has filed a registration statement with the securities and exchange commission covering a proposed offering of $ NUM million in long-term senior subordinated notes and warrants dow jones & co. said it acquired a NUM NUM interest in UNK corp. a subsidiary of oklahoma publishing co. oklahoma city that provides electronic research services terms were n't disclosed customers of either UNK or dow jones UNK are able to access the information on both services dow jones is the publisher of the wall street video games electronic information systems and playing cards posted a NUM NUM unconsolidated surge in pretax profit to NUM billion yen $ NUM million from NUM billion yen $ NUM million for the fiscal year ended aug. NUM sales surged NUM NUM to NUM billion yen from NUM billion net income rose NUM NUM to NUM billion yen from NUM billion UNK net fell to NUM yen from NUM yen because of expenses and capital adjustments without detailing specific product UNK UNK credited its bullish UNK in sales including advanced computer games and television entertainment systems to surging UNK sales in foreign markets export sales for leisure items alone for instance totaled NUM billion yen in the NUM months up from NUM billion in the previous fiscal year domestic leisure sales however were lower hertz corp. of park UNK n.j. said it retained merrill lynch capital markets to sell its hertz equipment rental corp. unit there is no pressing need to sell the unit but we are doing it so we can concentrate on our core business UNK automobiles in the u.s. and abroad said william UNK hertz 's executive vice president so-called road show to market the package around the world an increasing number of banks appear to be considering the option ", "cite_spans": [], "ref_spans": [ { "start": 450, "end": 459, "text": "Figure 4a", "ref_id": "FIGREF6" }, { "start": 1088, "end": 1097, "text": "Figure 4a", "ref_id": "FIGREF6" }, { "start": 2018, "end": 2027, "text": "Figure 4b", "ref_id": "FIGREF6" }, { "start": 9012, "end": 9020, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "Can LSTMs copy words without caches?", "sec_num": "6.1" }, { "text": "If LSTMs can already regenerate words from nearby context, how are copy mechanisms helping the model? We answer this question by analyzing how the neural cache model (Grave et al., 2017b ) helps with improving model performance. The cache records the hidden state h t at each timestep t, and computes a cache distribution over the words in the history as follows:", "cite_spans": [ { "start": 166, "end": 186, "text": "(Grave et al., 2017b", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "How does the cache help?", "sec_num": "6.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P cache (w t |w t 1 , . . . , w 1 ; h t , . . . 
, h 1 ) / t 1 X i=1 [w i = w t ] exp(\u2713h T i h t ),", "eq_num": "(5)" } ], "section": "How does the cache help?", "sec_num": "6.2" }, { "text": "where \u2713 controls the flatness of the distribution. This cache distribution is then interpolated with the model's output distribution over the vocabulary. Consequently, certain words from the history are upweighted, encouraging the model to copy them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "How does the cache help?", "sec_num": "6.2" }, { "text": "Caches help words that can be copied from long-range context the most. In order to study the effectiveness of the cache for the three classes of words (C near , C far , C none ), we evaluate an LSTM language model with and without a cache, and measure the difference in perplexity for these words. In both settings, the model is provided all prior context (not just 300 tokens) in or- Figure 7 : Model performance relative to using a cache. Error bars represent 95% confidence intervals. Words that can only be copied from the distant context benefit the most from using a cache.", "cite_spans": [], "ref_spans": [ { "start": 385, "end": 393, "text": "Figure 7", "ref_id": null } ], "eq_spans": [], "section": "How does the cache help?", "sec_num": "6.2" }, { "text": "der to replicate the Grave et al. (2017b) setup. The amount of history recorded, known as the cache size, is a hyperparameter set to 500 past timesteps for PTB and 3,875 for Wiki, both values very similar to the average document lengths in the respective datasets. We find that the cache helps words that can only be copied from long-range context (C far ) more than words that can be copied from nearby (C near ). This is illustrated by Figure 7 where without caching, C near words see a 22% increase in perplexity for PTB, and a 32% increase for Wiki, whereas C far see a 28% increase in perplexity for PTB, and a whopping 53% increase for Wiki. Thus, the cache is, in a sense, complementary to the standard model, since it especially helps regenerate words from the long-range context where the latter falls short.", "cite_spans": [ { "start": 21, "end": 41, "text": "Grave et al. (2017b)", "ref_id": null } ], "ref_spans": [ { "start": 438, "end": 446, "text": "Figure 7", "ref_id": null } ], "eq_spans": [], "section": "How does the cache help?", "sec_num": "6.2" }, { "text": "However, the cache also hurts about 36% of the words in PTB and 20% in Wiki, which are words that cannot be copied from context (C none ), as illustrated by bars for \"none\" in Figure 7 . We also provide some case studies showing success ( Fig. 5 ) and failure (Fig. 6 ) modes for the cache. We find that for the successful case, the cache distribution is concentrated on a single word that it wants to copy. However, when the target is not present in the history, the cache distribution is more flat, illustrating the model's confusion, shown in Figure 6 . This suggests that the neural cache model might benefit from having the option to ignore the cache when it cannot make a confident choice.", "cite_spans": [], "ref_spans": [ { "start": 176, "end": 184, "text": "Figure 7", "ref_id": null }, { "start": 239, "end": 245, "text": "Fig. 5", "ref_id": null }, { "start": 260, "end": 267, "text": "(Fig. 
6", "ref_id": "FIGREF7" }, { "start": 546, "end": 554, "text": "Figure 6", "ref_id": "FIGREF7" } ], "eq_spans": [], "section": "How does the cache help?", "sec_num": "6.2" }, { "text": "The findings presented in this paper provide a great deal of insight into how LSTMs model context. This information can prove extremely useful for improving language models. For instance, the discovery that some word types are more important than others can help refine word dropout strategies by making them adaptive to the different word types. Results on the cache also show that we can further improve performance by allowing the model to ignore the cache distribution when it is extremely uncertain, such as in Figure 6 . Differences in nearby vs. long-range context suggest that memory models, which feed explicit context representations to the LSTM (Ghosh et al., 2016; Lau et al., 2017) , could benefit from representations that specifically capture information orthogonal to that modeled by the LSTM.", "cite_spans": [ { "start": 656, "end": 676, "text": "(Ghosh et al., 2016;", "ref_id": "BIBREF6" }, { "start": 677, "end": 694, "text": "Lau et al., 2017)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 516, "end": 524, "text": "Figure 6", "ref_id": "FIGREF7" } ], "eq_spans": [], "section": "Discussion", "sec_num": "7" }, { "text": "In addition, the empirical methods used in this study are model-agnostic and can generalize to models other than the standard LSTM. This opens the path to generating a stronger understanding of model classes beyond test set perplexities, by comparing them across additional axes of information such as how much context they use on average, or how robust they are to shuffled contexts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "7" }, { "text": "Given the empirical nature of this study and the fact that the model and data are tightly coupled, separating model behavior from language characteristics, has proved challenging. More specifically, a number of confounding factors such as vocabulary size, dataset size etc. make this separation difficult. In an attempt to address this, we have chosen PTB and Wiki -two standard language modeling datasets which are diverse in con-tent (news vs. factual articles) and writing style, and are structured differently (eg: Wiki articles are 4-6x longer on average and contain extra information such as titles and paragraph/section markers). Making the data sources diverse in nature, has provided the opportunity to somewhat isolate effects of the model, while ensuring consistency in results. An interesting extension to further study this separation would lie in experimenting with different model classes and even different languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "7" }, { "text": "Recently, Chelba et al. (2017) , in proposing a new model, showed that on PTB, an LSTM language model with 13 tokens of context is similar to the infinite-context LSTM performance, with close to an 8% 5 increase in perplexity. This is compared to a 25% increase at 13 tokens of context in our setup. We believe this difference is attributed to the fact that their model was trained with restricted context and a different error propagation scheme, while ours is not. Further investigation would be an interesting direction for future work.", "cite_spans": [ { "start": 10, "end": 30, "text": "Chelba et al. 
(2017)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "7" }, { "text": "In this analytic study, we have empirically shown that a standard LSTM language model can effectively use about 200 tokens of context on two benchmark datasets, regardless of hyperparameter settings such as model size. It is sensitive to word order in the nearby context, but less so in the long-range context. In addition, the model is able to regenerate words from nearby context, but heavily relies on caches to copy words from far away. These findings not only help us better understand these models but also suggest ways for improving them, as discussed in Section 7. While observations in this paper are reported at the token level, deeper understanding of sentence-level interactions warrants further investigation, which we leave to future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "Words at the beginning of the test sequence with fewer than n tokens in the context are ignored for loss computation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We obtain part-of-speech tags using Stanford CoreNLP(Manning et al., 2014).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Table 3, 91 perplexity for the 13-gram vs. 84 for the infinite context model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank Arun Chaganty, Kevin Clark, Reid Pryzant, Yuhao Zhang and our anonymous reviewers for their thoughtful comments and suggestions. We gratefully acknowledge support of the DARPA Communicating with Computers (CwC) program under ARO prime contract no. W911NF15-1-0462 and the NSF via grant IIS-1514268.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null }, { "text": " Edouard Grave, Armand Joulin, and Nicolas Usunier. 2017b. Improving Neural Language Models with a Continuous Cache.International Conference on Learning Representations (ICLR) https://openreview.net/pdf?id=B184E5qee.", "cite_spans": [ { "start": 1, "end": 51, "text": "Edouard Grave, Armand Joulin, and Nicolas Usunier.", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "annex", "sec_num": null }, { "text": "Graves.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alex", "sec_num": null }, { "text": "Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850 https://arxiv.org/pdf/1308.0850.pdf.Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2016. The goldilocks principle:Reading children's books with explicit memory representations.International Conference on Learning Representations (ICLR) https://arxiv.org/pdf/1511.02301.pdf.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2013.", "sec_num": null }, { "text": "Hochreiter and J\u00fcrgen Schmidhuber. 1997.Long short-term memory. 
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sepp", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Finegrained analysis of sentence embeddings using auxiliary prediction tasks", "authors": [ { "first": "Yossi", "middle": [], "last": "Adi", "suffix": "" }, { "first": "Einat", "middle": [], "last": "Kermany", "suffix": "" }, { "first": "Yonatan", "middle": [], "last": "Belinkov", "suffix": "" }, { "first": "Ofer", "middle": [], "last": "Lavi", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2017, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2017. Fine- grained analysis of sentence embeddings using auxiliary prediction tasks. International Con- ference on Learning Representations (ICLR) https://openreview.net/pdf?id=BJh6Ztuxl.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Neural machine translation by jointly learning to align and translate. International Conference on Learning Representations (ICLR", "authors": [ { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. International Conference on Learning Representations (ICLR) https://arxiv.org/pdf/1409.0473.pdf.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Syntactic topic models", "authors": [ { "first": "Jordan", "middle": [], "last": "Boyd", "suffix": "" }, { "first": "-", "middle": [], "last": "Graber", "suffix": "" }, { "first": "David", "middle": [], "last": "Blei", "suffix": "" } ], "year": 2009, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "185--192", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jordan Boyd-Graber and David Blei. 2009. Syn- tactic topic models. In Advances in neu- ral information processing systems. pages 185- 192. https://papers.nips.cc/paper/3398-syntactic- topic-models.pdf.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "N-gram language modeling using recurrent neural network estimation", "authors": [ { "first": "Ciprian", "middle": [], "last": "Chelba", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Norouzi", "suffix": "" }, { "first": "Samy", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1703.10724" ] }, "num": null, "urls": [], "raw_text": "Ciprian Chelba, Mohammad Norouzi, and Samy Bengio. 2017. N-gram language model- ing using recurrent neural network esti- mation. 
arXiv preprint arXiv:1703.10724", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Language modeling with gated convolutional networks", "authors": [ { "first": "Angela", "middle": [], "last": "Yann N Dauphin", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Fan", "suffix": "" }, { "first": "David", "middle": [], "last": "Auli", "suffix": "" }, { "first": "", "middle": [], "last": "Grangier", "suffix": "" } ], "year": 2017, "venue": "International Conference on Machine Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yann N Dauphin, Angela Fan, Michael Auli, and David Grangier. 2017. Language modeling with gated convolutional networks. Interna- tional Conference on Machine Learning (ICML) https://arxiv.org/pdf/1612.08083.pdf.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A theoretically grounded application of dropout in recurrent neural networks", "authors": [ { "first": "Yarin", "middle": [], "last": "Gal", "suffix": "" }, { "first": "Zoubin", "middle": [], "last": "Ghahramani", "suffix": "" } ], "year": 2016, "venue": "Advances in neural information processing systems (NIPS)", "volume": "", "issue": "", "pages": "1019--1027", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yarin Gal and Zoubin Ghahramani. 2016. A theoret- ically grounded application of dropout in recurrent neural networks. In Advances in neural informa- tion processing systems (NIPS). pages 1019-1027. https://arxiv.org/pdf/1512.05287.pdf.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Contextual lstm (clstm) models for large scale nlp tasks. Workshop on Large-scale Deep Learning for Data Mining", "authors": [ { "first": "Shalini", "middle": [], "last": "Ghosh", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Strope", "suffix": "" }, { "first": "Scott", "middle": [], "last": "Roy", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Dean", "suffix": "" }, { "first": "Larry", "middle": [], "last": "Heck", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shalini Ghosh, Oriol Vinyals, Brian Strope, Scott Roy, Tom Dean, and Larry Heck. 2016. Contextual lstm (clstm) models for large scale nlp tasks. Work- shop on Large-scale Deep Learning for Data Min- ing, KDD https://arxiv.org/pdf/1602.06291.pdf.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Unbounded cache model for online language modeling with open vocabulary", "authors": [ { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "M", "middle": [], "last": "Moustapha", "suffix": "" }, { "first": "Armand", "middle": [], "last": "Cisse", "suffix": "" }, { "first": "", "middle": [], "last": "Joulin", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems (NIPS)", "volume": "", "issue": "", "pages": "6044--6054", "other_ids": {}, "num": null, "urls": [], "raw_text": "Edouard Grave, Moustapha M Cisse, and Ar- mand Joulin. 2017a. Unbounded cache model for online language modeling with open vo- cabulary. In Advances in Neural Information Processing Systems (NIPS). pages 6044-6054. https://papers.nips.cc/paper/7185-unbounded- cache-model-for-online-language-modeling-with- open-vocabulary.pdf. 
national Conference on Learning Representations (ICLR) https://openreview.net/pdf?id=r1aPbsFle.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Exploring the limits of language modeling", "authors": [ { "first": "Rafal", "middle": [], "last": "Jozefowicz", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Schuster", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Yonghui", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1602.02410" ] }, "num": null, "urls": [], "raw_text": "Rafal Jozefowicz, Oriol Vinyals, Mike Schus- ter, Noam Shazeer, and Yonghui Wu. 2016. Exploring the limits of language mod- eling. arXiv preprint arXiv:1602.02410", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Topically Driven Neural Language Model", "authors": [ { "first": "Timothy", "middle": [], "last": "Jey Han Lau", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Baldwin", "suffix": "" }, { "first": "", "middle": [], "last": "Cohn", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/P17-1033" ] }, "num": null, "urls": [], "raw_text": "Jey Han Lau, Timothy Baldwin, and Trevor Cohn. 2017. Topically Driven Neural Language Model. Association for Computational Linguistics (ACL) https://doi.org/10.18653/v1/P17-1033.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Visualizing and understanding neural models in nlp", "authors": [ { "first": "Jiwei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Xinlei", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2016, "venue": "North American Association of Computational Linguistics (NAACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiwei Li, Xinlei Chen, Eduard Hovy, and Dan Jurafsky. 2016. Visualizing and understanding neural models in nlp. North American As- sociation of Computational Linguistics (NAACL) http://www.aclweb.org/anthology/N16-1082.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Assessing the ability of lstms to learn syntax-sensitive dependencies", "authors": [ { "first": "Tal", "middle": [], "last": "Linzen", "suffix": "" }, { "first": "Emmanuel", "middle": [], "last": "Dupoux", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2016, "venue": "Transactions of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of lstms to learn syntax-sensitive dependencies. 
Transactions of the Association for Computational Linguistics (TACL) http://aclweb.org/anthology/Q16-1037.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "The stanford corenlp natural language processing toolkit", "authors": [ { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "John", "middle": [], "last": "Bauer", "suffix": "" }, { "first": "Jenny", "middle": [], "last": "Finkel", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Bethard", "suffix": "" }, { "first": "David", "middle": [], "last": "Mc-Closky", "suffix": "" } ], "year": 2014, "venue": "Proceedings of 52nd annual meeting of the association for computational linguistics: system demonstrations", "volume": "", "issue": "", "pages": "55--60", "other_ids": { "DOI": [ "10.3115/v1/P14-5010" ] }, "num": null, "urls": [], "raw_text": "Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David Mc- Closky. 2014. The stanford corenlp natural lan- guage processing toolkit. In Proceedings of 52nd annual meeting of the association for computational linguistics: system demonstrations. pages 55-60. https://doi.org/10.3115/v1/P14-5010.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Building a large annotated corpus of english: The penn treebank", "authors": [ { "first": "P", "middle": [], "last": "Mitchell", "suffix": "" }, { "first": "Mary", "middle": [ "Ann" ], "last": "Marcus", "suffix": "" }, { "first": "Beatrice", "middle": [], "last": "Marcinkiewicz", "suffix": "" }, { "first": "", "middle": [], "last": "Santorini", "suffix": "" } ], "year": 1993, "venue": "Computational linguistics", "volume": "19", "issue": "2", "pages": "313--330", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of english: The penn tree- bank. Computational linguistics 19(2):313-330. http://aclweb.org/anthology/J93-2004.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "On the State of the Art of Evaluation in", "authors": [ { "first": "Gabor", "middle": [], "last": "Melis", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Phil", "middle": [], "last": "Blunsom", "suffix": "" } ], "year": 2018, "venue": "Neural Language Models. International Conference on Learning Representations (ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gabor Melis, Chris Dyer, and Phil Blunsom. 2018. On the State of the Art of Evalua- tion in Neural Language Models. International Conference on Learning Representations (ICLR) https://openreview.net/pdf?id=ByJHuTgA-.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Regularizing and Optimizing LSTM Language Models. International Conference on Learning Representations (ICLR", "authors": [ { "first": "Stephen", "middle": [], "last": "Merity", "suffix": "" }, { "first": "Nitish", "middle": [], "last": "Shirish Keskar", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen Merity, Nitish Shirish Keskar, and Richard Socher. 2018. Regularizing and Optimizing LSTM Language Models. 
International Con- ference on Learning Representations (ICLR) https://openreview.net/pdf?id=SyyGPP0TZ.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Recurrent neural network based language model", "authors": [ { "first": "Tom\u00e1\u0161", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Karafi\u00e1t", "suffix": "" }, { "first": "Luk\u00e1\u0161", "middle": [], "last": "Burget", "suffix": "" }, { "first": "Ja\u0148", "middle": [], "last": "Cernock\u1ef3", "suffix": "" }, { "first": "Sanjeev", "middle": [], "last": "Khudanpur", "suffix": "" } ], "year": 2010, "venue": "Eleventh Annual Conference of the International Speech Communication Association", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom\u00e1\u0161 Mikolov, Martin Karafi\u00e1t, Luk\u00e1\u0161 Burget, Ja\u0148 Cernock\u1ef3, and Sanjeev Khudanpur. 2010. Recur- rent neural network based language model. In Eleventh Annual Conference of the International Speech Communication Association.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Using the output embedding to improve language models", "authors": [ { "first": "Ofir", "middle": [], "last": "Press", "suffix": "" }, { "first": "Lior", "middle": [], "last": "Wolf", "suffix": "" } ], "year": 2017, "venue": "European Chapter", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ofir Press and Lior Wolf. 2017. Using the output em- bedding to improve language models. European Chapter of the Association for Computational Lin- guistics http://aclweb.org/anthology/E17-2025.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Regularization of neural networks using dropconnect", "authors": [ { "first": "Li", "middle": [], "last": "Wan", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Zeiler", "suffix": "" }, { "first": "Sixin", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yann", "middle": [], "last": "Le Cun", "suffix": "" }, { "first": "Rob", "middle": [], "last": "Fergus", "suffix": "" } ], "year": 2013, "venue": "International Conference on Machine Learning (ICML)", "volume": "", "issue": "", "pages": "1058--1066", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li Wan, Matthew Zeiler, Sixin Zhang, Yann Le Cun, and Rob Fergus. 2013. Regularization of neural networks using dropconnect. In International Con- ference on Machine Learning (ICML). pages 1058- 1066.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "uris": null, "text": "vs. infrequent words. (d) Different parts-of-speech." }, "FIGREF1": { "num": null, "type_str": "figure", "uris": null, "text": "Effects of varying the number of tokens provided in the context, as compared to the same model provided with infinite context. Increase in loss represents an absolute increase in NLL over the entire corpus, due to restricted context. All curves are averaged over three random seeds, and error bars represent the standard deviation. (a) The model has an effective context size of 150 on PTB and 250 on Wiki. (b) Changing model hyperparameters does not change the context usage trend, but does change model performance. We report perplexities to highlight the consistent trend. (c) Infrequent words need more context than frequent words. (d) Content words need more context than function words." 
}, "FIGREF2": { "num": null, "type_str": "figure", "uris": null, "text": "(a) Perturb order locally, within 20 tokens of each point. (b) Perturb global order, i.e. all tokens in the context before a given point, in Wiki." }, "FIGREF3": { "num": null, "type_str": "figure", "uris": null, "text": "Effects of shuffling and reversing the order of words in 300 tokens of context, relative to an unperturbed baseline. All curves are averages from three random seeds, where error bars represent the standard deviation. (a) Changing the order of words within a 20-token window has negligible effect on the loss after the first 20 tokens. (b) Changing the global order of words within the context does not affect loss beyond 50 tokens." }, "FIGREF4": { "num": null, "type_str": "figure", "uris": null, "text": "Effect of dropping content and function words from 300 tokens of context relative to an unperturbed baseline, on PTB. Error bars represent 95% confidence intervals. Dropping both content and function words 5 tokens away from the target results in a nontrivial increase in loss, whereas beyond 20 tokens, only content words are relevant." }, "FIGREF5": { "num": null, "type_str": "figure", "uris": null, "text": "(a) Dropping tokens (b) Perturbing occurrences of target word in context." }, "FIGREF6": { "num": null, "type_str": "figure", "uris": null, "text": "Effects of perturbing the target word in the context compared to dropping long-range context altogether, on PTB. Error bars represent 95% confidence intervals. (a)" }, "FIGREF7": { "num": null, "type_str": "figure", "uris": null, "text": "Failure of neural cache on PTB. Lightly shaded regions show flat distribution. words are predicted from the rough representation of faraway context instead of specific occurrences of certain words." }, "TABREF1": { "num": null, "html": null, "content": "", "type_str": "table", "text": "Dataset statistics and performance relevant to our experiments." } } } }