{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:05:46.487232Z" }, "title": "On-The-Fly Information Retrieval Augmentation for Language Models", "authors": [ { "first": "Hai", "middle": [], "last": "Wang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Toyota Technological Institute at Chicago", "location": { "settlement": "Chicago", "region": "IL", "country": "USA" } }, "email": "haiwang@ttic.edu" }, { "first": "David", "middle": [], "last": "Mcallester", "suffix": "", "affiliation": { "laboratory": "", "institution": "Toyota Technological Institute at Chicago", "location": { "settlement": "Chicago", "region": "IL", "country": "USA" } }, "email": "mcallester@ttic.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Here we experiment with the use of information retrieval as an augmentation for pretrained language models. The text corpus used in information retrieval can be viewed as form of episodic memory which grows over time. By augmenting GPT 2.0 with information retrieval we achieve a zero shot 15% relative reduction in perplexity on Gigaword corpus without any retraining. We also validate our IR augmentation on an event co-reference task.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Here we experiment with the use of information retrieval as an augmentation for pretrained language models. The text corpus used in information retrieval can be viewed as form of episodic memory which grows over time. By augmenting GPT 2.0 with information retrieval we achieve a zero shot 15% relative reduction in perplexity on Gigaword corpus without any retraining. We also validate our IR augmentation on an event co-reference task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "We are interested in exploring the value of long term episodic memory in language modeling. For example, a language model can be used in January to assign a probability distribution over the statements that will appear in the newspaper in March. But one month later, in February, the distribution over the predictions for March should be updated to take into account factual developments since the previous prediction. Long term episodic memory should be taken into account when assigning a probability to a statement.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Here we take a simple approach in which a pretrained GPT language model (Radford et al., 2018a (Radford et al., , 2019 is zero-shot augmented with an episodic memory consisting simply of a corpus of past news articles. Conceptually the past news articles are viewed as additional training data which can be legitimately accessed when evaluating on future text. In our most basic experiment we calculate the probability of a future article by first calculating the probability of its first k sentences using the pre-trained GPT model. We then use the first k sentences as a query in an information retrieval system to extract a relevant past article. We then insert the past article following the first k sentences when calculating the probability of the remainder of the future article using the same pre-trained GPT model. This is a zero-shot augmentation in the sense that there is no additional training or fine tuning of the pre-trained model. Our results show that this augmentation significantly reduces perplexity. 
We also present various other experiments including results on fine-tuning the model in the presence of the memory and the effect of this memory on event co-reference.", "cite_spans": [ { "start": 72, "end": 94, "text": "(Radford et al., 2018a", "ref_id": "BIBREF23" }, { "start": 95, "end": 118, "text": "(Radford et al., , 2019", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Various language models have utilized external knowledge or long contexts (Paperno et al., 2016; Yang and Mitchell, 2017; Peng et al., 2019; Khandelwal et al., 2018; Ghosh et al., 2016; Lau et al., 2017; Grave et al., 2016; Parthasarathi and Pineau, 2018) . But these papers do not address the question of whether additional context or external knowledge is useful as a zero-shot augmentation of large scale pre-trained NLP models.", "cite_spans": [ { "start": 74, "end": 96, "text": "(Paperno et al., 2016;", "ref_id": "BIBREF19" }, { "start": 97, "end": 121, "text": "Yang and Mitchell, 2017;", "ref_id": "BIBREF34" }, { "start": 122, "end": 140, "text": "Peng et al., 2019;", "ref_id": "BIBREF22" }, { "start": 141, "end": 165, "text": "Khandelwal et al., 2018;", "ref_id": "BIBREF11" }, { "start": 166, "end": 185, "text": "Ghosh et al., 2016;", "ref_id": "BIBREF7" }, { "start": 186, "end": 203, "text": "Lau et al., 2017;", "ref_id": "BIBREF15" }, { "start": 204, "end": 223, "text": "Grave et al., 2016;", "ref_id": "BIBREF8" }, { "start": 224, "end": 255, "text": "Parthasarathi and Pineau, 2018)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The value of external knowledge has previously been demonstrated for NLP tasks such as natural language inference Yang et al., 2019) , language generation (Parthasarathi and Pineau, 2018) , knowledge base completion (Toutanova et al., 2015; Das et al., 2017) and question answering (Sun et al., 2019 (Sun et al., , 2018 Dhingra et al., 2017) . However, all those prior works assume the model is small and trained from scratch.", "cite_spans": [ { "start": 114, "end": 132, "text": "Yang et al., 2019)", "ref_id": "BIBREF35" }, { "start": 155, "end": 187, "text": "(Parthasarathi and Pineau, 2018)", "ref_id": "BIBREF21" }, { "start": 216, "end": 240, "text": "(Toutanova et al., 2015;", "ref_id": "BIBREF31" }, { "start": 241, "end": 258, "text": "Das et al., 2017)", "ref_id": "BIBREF5" }, { "start": 282, "end": 299, "text": "(Sun et al., 2019", "ref_id": "BIBREF29" }, { "start": 300, "end": 319, "text": "(Sun et al., , 2018", "ref_id": "BIBREF30" }, { "start": 320, "end": 341, "text": "Dhingra et al., 2017)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "As large scale pre-trained models have become more powerful it is not immediately clear whether external resources can still add value. The only work we know of on using external resources in modern large scale models is Yang et al. (2019) where a human curated external lexical resource is used to improve BERT.", "cite_spans": [ { "start": 221, "end": 239, "text": "Yang et al. (2019)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Our approach bears some resemblance to neural cache models (Grave et al., 2016) . However, neural cache models store past hidden states as memory and accesses them through a dot product with the current hidden states. 
This is different from retrieving knowledge from a corpus-sized memory.", "cite_spans": [ { "start": 59, "end": 79, "text": "(Grave et al., 2016)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Our approach is also somewhat related to memory networks (Weston et al., 2014) . Memory networks have a memory module which can be learnt jointly with other components. It has shown success in applications such as machine reading comprehension (Kumar et al., 2016a,b; Shi et al., 2016) and visual question answering (Na et al., 2017; Ma et al., 2018; Su et al., 2018) . Significant progress in memory networks has been achieved in both architecture (Chandar et al., 2016; Miller et al., 2016; Gulcehre et al., 2017) and model scale (Rae et al., 2016; Lample et al., 2019) .", "cite_spans": [ { "start": 57, "end": 78, "text": "(Weston et al., 2014)", "ref_id": null }, { "start": 244, "end": 267, "text": "(Kumar et al., 2016a,b;", "ref_id": null }, { "start": 268, "end": 285, "text": "Shi et al., 2016)", "ref_id": "BIBREF27" }, { "start": 316, "end": 333, "text": "(Na et al., 2017;", "ref_id": "BIBREF18" }, { "start": 334, "end": 350, "text": "Ma et al., 2018;", "ref_id": "BIBREF16" }, { "start": 351, "end": 367, "text": "Su et al., 2018)", "ref_id": "BIBREF28" }, { "start": 449, "end": 471, "text": "(Chandar et al., 2016;", "ref_id": "BIBREF1" }, { "start": 472, "end": 492, "text": "Miller et al., 2016;", "ref_id": "BIBREF17" }, { "start": 493, "end": 515, "text": "Gulcehre et al., 2017)", "ref_id": "BIBREF9" }, { "start": 532, "end": 550, "text": "(Rae et al., 2016;", "ref_id": "BIBREF26" }, { "start": 551, "end": 571, "text": "Lample et al., 2019)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Several papers have formulated, and experimented with, scalable memory networks -memory networks that employ some method of efficiently reading and writing to very large neural memories. This is done with approximate nearest neighbor methods in Rae et al. (2016) and with product keys in Lample et al. (2019) . These large memories are used to provide additional model capacity where the memory contents are trained over a large data set using gradient descent training, just as one would train the parameters of a very large network. It is shown in Lample et al. (2019) that it is possible to insert a large memory as a layer in a transformer architecture resulting a model where the same number of parameters and the same performance can be achieved with half the layers and with much faster training time than a standard transformer architecture. Here, however, we are proposing zero-shot augmentation with an external data source used as an episodic memory.", "cite_spans": [ { "start": 245, "end": 262, "text": "Rae et al. (2016)", "ref_id": "BIBREF26" }, { "start": 288, "end": 308, "text": "Lample et al. (2019)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The use of key-value memories in Miller et al. (2016) is particularly similar to our model. Keyvalue memories were used there in treating a corpus of Wikipedia movie pages as a memory for answering questions about movies. As in our system, articles were extracted using word based information retrieval. Each article was encoded as a vector which was then given to a question answering architecture. 
This was shown to improve on automated knowledge base extraction from the same corpus but was still not competitive with human-curated knowledge graphs for movies. Here we give the text of the retrieved article directly to the language model architecture and focus on augmenting large scale language models.", "cite_spans": [ { "start": 33, "end": 53, "text": "Miller et al. (2016)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "We use the pre-trained transformer GPT 2.0 (Radford et al., 2019). Let W_w and W_p be the subword and position embeddings, respectively. Let M denote the total number of layers. For a token at time step t, the m-th layer's hidden state h_t^m is given by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "h_t^m = W_w + W_p if m = 0; h_t^m = TB(h_t^{m\u22121}) if 1 \u2264 m \u2264 M", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "where TB stands for Transformer Block. We use the last layer's hidden state h_t^M as the representation H_t of the token at time step t. We augment GPT 2.0 with a large episodic memory component, and the overall architecture is shown in Figure 1 . For a sequence S with T tokens, let S_1, . . ., S_p be the tokens of the first k sentences. Let C be a sequence (article) retrieved from memory using the first k sentences as the query; the vector H_t is then:", "cite_spans": [], "ref_spans": [ { "start": 232, "end": 240, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "H_t = GPT(S_1, . . . , S_t) if t \u2264 p; H_t = GPT(S_1, . . . , S_p, C, S_{p+1}, . . . , S_t) otherwise", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "That is, the first k sentences are fed directly to GPT to obtain their representations. The representations of the remaining tokens are conditioned on both the first k sentences and the retrieved context C.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "We focus on two tasks: document-level language modelling and event co-reference. In both tasks we take a document as input and use the first k sentences to query the memory. To calculate the perplexity of a document, we compute the document's log-probability by summing its byte-level log-probabilities. [Table 1 caption: (Kumar et al., 2016b); SAM: Sparse Access Memory (Rae et al., 2016); KVM: Key Value Memory (Miller et al., 2016); LMN: Large Memory Network (Lample et al., 2019). Memory size is measured in their own words.]", "cite_spans": [ { "start": 298, "end": 319, "text": "(Kumar et al., 2016b)", "ref_id": "BIBREF13" }, { "start": 348, "end": 366, "text": "(Rae et al., 2016)", "ref_id": "BIBREF26" }, { "start": 391, "end": 412, "text": "(Miller et al., 2016)", "ref_id": "BIBREF17" }, { "start": 441, "end": 462, "text": "(Lample et al., 2019)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "We then divide this log-probability by the actual word count in the query document. We use Gigaword (Parker et al., 2011) as both our language modeling test set and our external memory. Gigaword contains news from different sources such as the NY Times and Xinhua News. For language modelling we use the NY Times portion because it is written by native English speakers. 
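To make this computation concrete, the following minimal sketch scores the remainder of a document with and without a retrieved article inserted after its first k sentences, using a Hugging Face GPT-2 (the pytorch-transformers library referenced in this section). The function names, the word-count argument and the prefix/continuation split are our illustrative assumptions, not the authors' actual evaluation code.

# Minimal sketch (assumed setup): score a document with and without a retrieved
# article C inserted after its first k sentences, mirroring the definition of H_t.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer  # pytorch-transformers exposes the same classes

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def continuation_log_prob(prefix, continuation, context=None):
    """Sum of log P(continuation tokens | prefix [+ retrieved context])."""
    prefix_ids = tokenizer.encode(prefix)
    if context is not None:
        prefix_ids = prefix_ids + tokenizer.encode(context)  # insert C after the first k sentences
    cont_ids = tokenizer.encode(continuation)
    input_ids = torch.tensor([prefix_ids + cont_ids])
    with torch.no_grad():
        logits = model(input_ids)[0]                          # (1, seq_len, vocab_size)
    log_probs = torch.log_softmax(logits, dim=-1)
    total = 0.0
    for i, tok in enumerate(cont_ids):
        # the token at position len(prefix_ids)+i is predicted from the previous position
        total += log_probs[0, len(prefix_ids) + i - 1, tok].item()
    return total

def perplexity(prefix, continuation, word_count, context=None):
    # divide the document log-probability by the actual word count, then exponentiate
    log_p = continuation_log_prob(prefix, continuation, context)
    return math.exp(-log_p / word_count)

The prefix itself (the first k sentences) would be scored in the same way with no inserted context, and its log-probability added before normalizing by the word count.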
GPT 2.0 is trained on Common Crawl, which contains news collected from 2008 onward. To avoid testing on GPT-2 training data, we use Gigaword articles collected prior to 2008. For the pre-trained language model we use GPT 2.0 (Radford et al., 2019) (implementation: https://github.com/huggingface/pytorch-transformers). It comes in three pre-trained sizes: GPT Small, Medium and Large.", "cite_spans": [ { "start": 96, "end": 117, "text": "(Parker et al., 2011)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "For information retrieval we use Lucene due to its simplicity. Given a query document, we first do sentence and word tokenization and then use the first k sentences to retrieve the top 20 documents with the default TF-IDF similarity provided by Lucene. Since overly distant document pairs are uninformative and overly related pairs tend to be duplicates of the test article, we further filter these top-ranked documents by time stamp, news source and cosine similarity. More specifically, we choose the highest ranked retrieved document that simultaneously satisfies the following three conditions: it comes from a different news source; it appears earlier than, but within a two-week window of, the test document; and the bag-of-words cosine similarity between the test and retrieved documents is no larger than 0.6\u03b1, where \u03b1 is the largest bag-of-words cosine similarity between the test article and any retrieved article. To support fine-tuning experiments we constructed a corpus of pairs of a query article and a cached retrieved document. We split the dataset into train/dev/test by the query document's time stamp. The train/dev/test sizes are 79622, 16927 and 8045 documents, respectively. For zero-shot experiments we use the test set of 8045 articles. We do experiments with k \u2208 {1, 2, 5}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "To check the quality of query-retrieved pairs, we randomly sample 100 pairs from the dev set and compute the bag-of-words cosine similarity between the two documents. The mean cosine similarity is 0.15. We also manually inspect them: we ask two NLP researchers to independently annotate each query-retrieved pair as \"BAD\" or \"OK\", i.e., if the two documents are near-duplicates or totally unrelated, the pair is \"BAD\"; otherwise, it is \"OK\". Among the 100 pairs, 83 are \"OK\" and 17 are \"BAD\" due to irrelevance. Cohen's kappa coefficient between the two annotators is 0.94.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "For language modeling we try zero-shot memory augmentation, fine-tuned memory augmentation, and training a small memory-augmented network from scratch. When training, we use the Adam optimizer from GPT 1.0 (Radford et al., 2018b). The learning rate is 0.001, the weight decay is 0.01, and the warm-up proportion is 0.1. For other parameters, we use the default values from GPT 2.0. Fine-tuning on Gigaword takes less than one day on a single GPU.", "cite_spans": [ { "start": 206, "end": 229, "text": "(Radford et al., 2018b)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Language modelling", "sec_num": "4.1" }, { "text": "Zero-shot and fine-tuning results. Following Radford et al. (2019), we first evaluate our model on Gigaword in the zero-shot setting and then fine-tune the model. 
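The candidate filtering described above can be expressed independently of the search engine. Below is a minimal sketch under assumed inputs: the Lucene top-20 results are passed in as ranked dictionaries with 'tokens', 'source' and 'date' fields (our invented schema, not the authors' actual data format), and the bag-of-words cosine filter with the 0.6-times-alpha threshold is applied in rank order.

# Sketch of the candidate filter; `candidates` stands in for Lucene's top-20 ranked
# results, each an assumed dict with 'tokens', 'source' and 'date' (datetime.date).
import math
from collections import Counter
from datetime import timedelta

def bow_cosine(tokens_a, tokens_b):
    a, b = Counter(tokens_a), Counter(tokens_b)
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def choose_context(test_doc, candidates):
    sims = [bow_cosine(test_doc["tokens"], c["tokens"]) for c in candidates]
    alpha = max(sims, default=0.0)                 # best bag-of-words similarity among candidates
    for cand, sim in zip(candidates, sims):        # keep Lucene's ranking order
        if (cand["source"] != test_doc["source"]
                and test_doc["date"] - timedelta(days=14) <= cand["date"] < test_doc["date"]
                and sim <= 0.6 * alpha):
            return cand                            # highest-ranked candidate passing all three filters
    return None                                    # no usable context found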
The results are given in Table 2. [Table 2 caption: Perplexity for the zero-shot (top 3 rows) and fine-tuning (last row) settings with different values of k used to retrieve the context. woc: without retrieved context.]", "cite_spans": [], "ref_spans": [ { "start": 184, "end": 191, "text": "Table 2", "ref_id": null }, { "start": 192, "end": 199, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Language modelling", "sec_num": "4.1" }, { "text": "From Table 2, we see that with additional context retrieved from episodic memory, all GPT model sizes obtain significantly lower perplexity than the original GPT 2.0. Fine-tuning the model with context further reduces the overall perplexity. We only fine-tune GPT Small due to GPU memory constraints. Preliminary analysis indicates that most of the perplexity reduction comes at content words and semantically rich words whose prediction requires broader context. This is consistent with the phenomena found in Khandelwal et al. (2018). We further find that smaller k leads to slightly worse retrieval quality; however, more of the remaining sentences then benefit from the retrieved context. Since Gigaword contains newswire, the first several sentences are usually important summaries, so overall, smaller k results in lower perplexity.", "cite_spans": [ { "start": 543, "end": 567, "text": "Khandelwal et al. (2018)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 5, "end": 12, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Language modelling", "sec_num": "4.1" }, { "text": "Train from scratch. We also investigate training this form of memory-augmented model from scratch on our query-retrieved pairs. For these experiments we train smaller transformers; the results are given in Table 3. From Table 3, we see that additional context still helps and we can obtain decent perplexity even with quite small models. [Table 3 caption: Perplexity when training from scratch. E: hidden state dimensionality; H: number of heads; L: number of layers. GPT-Small has the configuration E=764, H=12, L=12.]", "cite_spans": [], "ref_spans": [ { "start": 208, "end": 215, "text": "Table 3", "ref_id": null }, { "start": 223, "end": 230, "text": "Table 3", "ref_id": null }, { "start": 339, "end": 346, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Language modelling", "sec_num": "4.1" }, { "text": "When context is irrelevant. We also evaluate our method on Wikitext-2/103, where the retrieved context is irrelevant due to the domain difference between Wikipedia and Gigaword. In this case, we use the top-ranked document from Gigaword as the context.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "Intuitively, episodic memory is useful because it contains information about the particular events mentioned in the test document. With this in mind we evaluate our approach on the event co-reference dataset ECB+ (Cybulska and Vossen, 2014). ECB+ contains 982 documents clustered into 43 topics, and has two evaluation settings: coreferring mentions occurring within a single document (within document) or across a document collection (cross document). For the event co-reference pipeline, we follow the joint modeling method of Barhom et al. (2019), which jointly represents entity and event mentions with various features and learns a pairwise mention/entity scorer for coreference classification. 
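The mention representations discussed next can be extracted from GPT 2.0 roughly as in the following sketch; the use of GPT2Model from the Hugging Face library, the sub-word span matching and the mean pooling over the mention's hidden states are our illustrative assumptions, not the exact feature extractor used in the pipeline.

# Illustrative sketch: a mention vector from GPT-2 hidden states, optionally
# conditioning on a retrieved article prepended to the document.
import torch
from transformers import GPT2Model, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
encoder = GPT2Model.from_pretrained("gpt2")
encoder.eval()

def mention_vector(document, mention, context=None):
    prefix_ids = tokenizer.encode(context) if context else []
    doc_ids = tokenizer.encode(document)
    mention_ids = tokenizer.encode(" " + mention)  # GPT-2 BPE usually expects the leading space
    # locate the mention's sub-word span in the document (first occurrence; a real
    # pipeline would align via character offsets instead)
    start = next(i for i in range(len(doc_ids) - len(mention_ids) + 1)
                 if doc_ids[i:i + len(mention_ids)] == mention_ids)
    input_ids = torch.tensor([prefix_ids + doc_ids])
    with torch.no_grad():
        hidden = encoder(input_ids)[0]             # last-layer hidden states, (1, seq_len, dim)
    span = hidden[0, len(prefix_ids) + start : len(prefix_ids) + start + len(mention_ids)]
    return span.mean(dim=0)                        # pooled mention representation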
We augment their mention features with the mention's vector representations extracted from either GPT 2.0 or our zero-shot augmented GPT 2.0. For event co-reference, we use the whole test document to retrieve the context from Gigaword. From Table 5 , we see that the context can help boost the CONLL F1 score. Table 5 : F1 score on ECB+ dataset. KCP: Kenyon-Dean et al. (2018) where they add a clustering-oriented regularization term; CV: Cybulska and Vossen (2015) where they add the feature calculated from \"event template\"; JM: Barhom et al. (2019) . \u2663: we also feed the retrieved context to GPT to get the representation.", "cite_spans": [ { "start": 212, "end": 239, "text": "(Cybulska and Vossen, 2014)", "ref_id": "BIBREF3" }, { "start": 529, "end": 549, "text": "Barhom et al. (2019)", "ref_id": "BIBREF0" }, { "start": 1057, "end": 1082, "text": "Kenyon-Dean et al. (2018)", "ref_id": "BIBREF10" }, { "start": 1145, "end": 1171, "text": "Cybulska and Vossen (2015)", "ref_id": "BIBREF4" }, { "start": 1237, "end": 1257, "text": "Barhom et al. (2019)", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 947, "end": 954, "text": "Table 5", "ref_id": null }, { "start": 1016, "end": 1023, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Event Co-reference", "sec_num": "4.2" }, { "text": "In this paper we propose a method to augment a pre-trained NLP model with a large episodic memory. Unlike previous work, we use information retrieval to handle a large external corpus of text and feed retrieved documents directly to language models. Evaluation results on language modelling and event co-reference show the promise of our method. To the best of our knowledge, this is the first work that augments pre-trained NLP models with large episodic memory. In principle, the memory-augmented GPT-2 can be used as a variant of GPT-2 for any downstream tasks, such as GLUE tasks ), although we have not experimented with that here.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Revisiting joint modeling of cross-document entity and event coreference resolution", "authors": [ { "first": "Shany", "middle": [], "last": "Barhom", "suffix": "" }, { "first": "Vered", "middle": [], "last": "Shwartz", "suffix": "" }, { "first": "Alon", "middle": [], "last": "Eirew", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Bugert", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "4179--4189", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shany Barhom, Vered Shwartz, Alon Eirew, Michael Bugert, Nils Reimers, and Ido Dagan. 2019. Re- visiting joint modeling of cross-document entity and event coreference resolution. 
pages 4179-4189.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Hierarchical memory networks", "authors": [ { "first": "Sarath", "middle": [], "last": "Chandar", "suffix": "" }, { "first": "Sungjin", "middle": [], "last": "Ahn", "suffix": "" }, { "first": "Hugo", "middle": [], "last": "Larochelle", "suffix": "" }, { "first": "Pascal", "middle": [], "last": "Vincent", "suffix": "" }, { "first": "Gerald", "middle": [], "last": "Tesauro", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1605.07427" ] }, "num": null, "urls": [], "raw_text": "Sarath Chandar, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, and Yoshua Bengio. 2016. Hierarchical memory networks. arXiv preprint arXiv:1605.07427.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Neural natural language inference models enhanced with external knowledge", "authors": [ { "first": "Qian", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Xiaodan", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Zhen-Hua", "middle": [], "last": "Ling", "suffix": "" }, { "first": "Diana", "middle": [], "last": "Inkpen", "suffix": "" }, { "first": "Si", "middle": [], "last": "Wei", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "2406--2417", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Diana Inkpen, and Si Wei. 2018. Neural natural language inference models enhanced with external knowledge. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 2406-2417.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Using a sledgehammer to crack a nut? lexical diversity and event coreference resolution", "authors": [ { "first": "Agata", "middle": [], "last": "Cybulska", "suffix": "" }, { "first": "Piek", "middle": [], "last": "Vossen", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC-2014)", "volume": "", "issue": "", "pages": "4545--4552", "other_ids": {}, "num": null, "urls": [], "raw_text": "Agata Cybulska and Piek Vossen. 2014. Using a sledgehammer to crack a nut? lexical diversity and event coreference resolution. In Proceedings of the Ninth International Conference on Language Re- sources and Evaluation (LREC-2014), pages 4545- 4552.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Translating granularity of event slots into features for event coreference resolution", "authors": [ { "first": "Agata", "middle": [], "last": "Cybulska", "suffix": "" }, { "first": "Piek", "middle": [], "last": "Vossen", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the the 3rd Workshop on EVENTS: Definition, Detection, Coreference, and Representation", "volume": "", "issue": "", "pages": "1--10", "other_ids": {}, "num": null, "urls": [], "raw_text": "Agata Cybulska and Piek Vossen. 2015. Translating granularity of event slots into features for event coreference resolution. 
In Proceedings of the the 3rd Workshop on EVENTS: Definition, Detection, Coref- erence, and Representation, pages 1-10.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Go for a walk and arrive at the answer: Reasoning over paths in knowledge bases using reinforcement learning", "authors": [ { "first": "Rajarshi", "middle": [], "last": "Das", "suffix": "" }, { "first": "Shehzaad", "middle": [], "last": "Dhuliawala", "suffix": "" }, { "first": "Manzil", "middle": [], "last": "Zaheer", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Vilnis", "suffix": "" }, { "first": "Ishan", "middle": [], "last": "Durugkar", "suffix": "" }, { "first": "Akshay", "middle": [], "last": "Krishnamurthy", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Smola", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1711.05851" ] }, "num": null, "urls": [], "raw_text": "Rajarshi Das, Shehzaad Dhuliawala, Manzil Zaheer, Luke Vilnis, Ishan Durugkar, Akshay Krishna- murthy, Alex Smola, and Andrew McCallum. 2017. Go for a walk and arrive at the answer: Reasoning over paths in knowledge bases using reinforcement learning. arXiv preprint arXiv:1711.05851.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Linguistic knowledge as memory for recurrent neural networks", "authors": [ { "first": "Bhuwan", "middle": [], "last": "Dhingra", "suffix": "" }, { "first": "Zhilin", "middle": [], "last": "Yang", "suffix": "" }, { "first": "W", "middle": [], "last": "William", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Cohen", "suffix": "" }, { "first": "", "middle": [], "last": "Salakhutdinov", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1703.02620" ] }, "num": null, "urls": [], "raw_text": "Bhuwan Dhingra, Zhilin Yang, William W Cohen, and Ruslan Salakhutdinov. 2017. Linguistic knowledge as memory for recurrent neural networks. arXiv preprint arXiv:1703.02620.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Contextual lstm (clstm) models for large scale nlp tasks", "authors": [ { "first": "Shalini", "middle": [], "last": "Ghosh", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Strope", "suffix": "" }, { "first": "Scott", "middle": [], "last": "Roy", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Dean", "suffix": "" }, { "first": "Larry", "middle": [], "last": "Heck", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1602.06291" ] }, "num": null, "urls": [], "raw_text": "Shalini Ghosh, Oriol Vinyals, Brian Strope, Scott Roy, Tom Dean, and Larry Heck. 2016. Contextual lstm (clstm) models for large scale nlp tasks. arXiv preprint arXiv:1602.06291.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Improving neural language models with a continuous cache", "authors": [ { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Nicolas", "middle": [], "last": "Usunier", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1612.04426" ] }, "num": null, "urls": [], "raw_text": "Edouard Grave, Armand Joulin, and Nicolas Usunier. 2016. 
Improving neural language models with a con- tinuous cache. arXiv preprint arXiv:1612.04426.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Memory augmented neural networks with wormhole connections", "authors": [ { "first": "Caglar", "middle": [], "last": "Gulcehre", "suffix": "" }, { "first": "Sarath", "middle": [], "last": "Chandar", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1701.08718" ] }, "num": null, "urls": [], "raw_text": "Caglar Gulcehre, Sarath Chandar, and Yoshua Ben- gio. 2017. Memory augmented neural net- works with wormhole connections. arXiv preprint arXiv:1701.08718.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Resolving event coreference with supervised representation learning and clusteringoriented regularization", "authors": [ { "first": "Kian", "middle": [], "last": "Kenyon-Dean", "suffix": "" }, { "first": "Jackie Chi Kit", "middle": [], "last": "Cheung", "suffix": "" }, { "first": "Doina", "middle": [], "last": "Precup", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics", "volume": "", "issue": "", "pages": "1--10", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kian Kenyon-Dean, Jackie Chi Kit Cheung, and Doina Precup. 2018. Resolving event coreference with supervised representation learning and clustering- oriented regularization. In Proceedings of the Sev- enth Joint Conference on Lexical and Computa- tional Semantics, pages 1-10.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Sharp nearby, fuzzy far away: How neural language models use context", "authors": [ { "first": "Urvashi", "middle": [], "last": "Khandelwal", "suffix": "" }, { "first": "He", "middle": [], "last": "He", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Qi", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "284--294", "other_ids": {}, "num": null, "urls": [], "raw_text": "Urvashi Khandelwal, He He, Peng Qi, and Dan Juraf- sky. 2018. Sharp nearby, fuzzy far away: How neu- ral language models use context. 
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 284-294.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Ask me anything: Dynamic memory networks for natural language processing", "authors": [ { "first": "Ankit", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Ozan", "middle": [], "last": "Irsoy", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Ondruska", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "James", "middle": [], "last": "Bradbury", "suffix": "" }, { "first": "Ishaan", "middle": [], "last": "Gulrajani", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Zhong", "suffix": "" }, { "first": "Romain", "middle": [], "last": "Paulus", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" } ], "year": 2016, "venue": "International conference on machine learning", "volume": "", "issue": "", "pages": "1378--1387", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ankit Kumar, Ozan Irsoy, Peter Ondruska, Mohit Iyyer, James Bradbury, Ishaan Gulrajani, Victor Zhong, Romain Paulus, and Richard Socher. 2016a. Ask me anything: Dynamic memory networks for natu- ral language processing. In International conference on machine learning, pages 1378-1387.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Ask me anything: Dynamic memory networks for natural language processing", "authors": [ { "first": "Ankit", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Ozan", "middle": [], "last": "Irsoy", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Ondruska", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "James", "middle": [], "last": "Bradbury", "suffix": "" }, { "first": "Ishaan", "middle": [], "last": "Gulrajani", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Zhong", "suffix": "" }, { "first": "Romain", "middle": [], "last": "Paulus", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" } ], "year": 2016, "venue": "International conference on machine learning", "volume": "", "issue": "", "pages": "1378--1387", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ankit Kumar, Ozan Irsoy, Peter Ondruska, Mohit Iyyer, James Bradbury, Ishaan Gulrajani, Victor Zhong, Romain Paulus, and Richard Socher. 2016b. Ask me anything: Dynamic memory networks for natu- ral language processing. In International conference on machine learning, pages 1378-1387.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Large memory layers with product keys", "authors": [ { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Alexandre", "middle": [], "last": "Sablayrolles", "suffix": "" }, { "first": "Marc'aurelio", "middle": [], "last": "Ranzato", "suffix": "" }, { "first": "Ludovic", "middle": [], "last": "Denoyer", "suffix": "" }, { "first": "Herv\u00e9", "middle": [], "last": "J\u00e9gou", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "8546--8557", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guillaume Lample, Alexandre Sablayrolles, Marc'Aurelio Ranzato, Ludovic Denoyer, and Herv\u00e9 J\u00e9gou. 2019. Large memory layers with product keys. 
In Advances in Neural Information Processing Systems, pages 8546-8557.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Topically driven neural language model", "authors": [ { "first": "Timothy", "middle": [], "last": "Jey Han Lau", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Baldwin", "suffix": "" }, { "first": "", "middle": [], "last": "Cohn", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "355--365", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jey Han Lau, Timothy Baldwin, and Trevor Cohn. 2017. Topically driven neural language model. In Proceedings of the 55th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 355-365.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Visual question answering with memory-augmented networks", "authors": [ { "first": "Chao", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Chunhua", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Dick", "suffix": "" }, { "first": "Qi", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", "volume": "", "issue": "", "pages": "6975--6984", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chao Ma, Chunhua Shen, Anthony Dick, Qi Wu, Peng Wang, Anton van den Hengel, and Ian Reid. 2018. Visual question answering with memory-augmented networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6975-6984.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Key-value memory networks for directly reading documents", "authors": [ { "first": "Alexander", "middle": [], "last": "Miller", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Fisch", "suffix": "" }, { "first": "Jesse", "middle": [], "last": "Dodge", "suffix": "" }, { "first": "", "middle": [], "last": "Amir-Hossein", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Karimi", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Bordes", "suffix": "" }, { "first": "", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1400--1409", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexander Miller, Adam Fisch, Jesse Dodge, Amir- Hossein Karimi, Antoine Bordes, and Jason Weston. 2016. Key-value memory networks for directly read- ing documents. In Proceedings of the 2016 Con- ference on Empirical Methods in Natural Language Processing, pages 1400-1409.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "A read-write memory network for movie story understanding", "authors": [ { "first": "Seil", "middle": [], "last": "Na", "suffix": "" }, { "first": "Sangho", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Jisung", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Gunhee", "middle": [], "last": "Kim", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the IEEE International Conference on Computer Vision", "volume": "", "issue": "", "pages": "677--685", "other_ids": {}, "num": null, "urls": [], "raw_text": "Seil Na, Sangho Lee, Jisung Kim, and Gunhee Kim. 2017. 
A read-write memory network for movie story understanding. In Proceedings of the IEEE Interna- tional Conference on Computer Vision, pages 677- 685.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "The lambada dataset: Word prediction requiring a broad discourse context", "authors": [ { "first": "Denis", "middle": [], "last": "Paperno", "suffix": "" }, { "first": "Germ\u00e1n", "middle": [], "last": "Kruszewski", "suffix": "" }, { "first": "Angeliki", "middle": [], "last": "Lazaridou", "suffix": "" }, { "first": "Ngoc", "middle": [ "Quan" ], "last": "Pham", "suffix": "" }, { "first": "Raffaella", "middle": [], "last": "Bernardi", "suffix": "" }, { "first": "Sandro", "middle": [], "last": "Pezzelle", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" }, { "first": "Gemma", "middle": [], "last": "Boleda", "suffix": "" }, { "first": "Raquel", "middle": [], "last": "Fernandez", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1525--1534", "other_ids": {}, "num": null, "urls": [], "raw_text": "Denis Paperno, Germ\u00e1n Kruszewski, Angeliki Lazari- dou, Ngoc Quan Pham, Raffaella Bernardi, San- dro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel Fernandez. 2016. The lambada dataset: Word prediction requiring a broad discourse context. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1525-1534.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "English gigaword. Linguistic Data Consortium", "authors": [ { "first": "Robert", "middle": [], "last": "Parker", "suffix": "" }, { "first": "David", "middle": [], "last": "Graff", "suffix": "" }, { "first": "Junbo", "middle": [], "last": "Kong", "suffix": "" }, { "first": "Ke", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Kazuaki", "middle": [], "last": "Maeda", "suffix": "" } ], "year": 2011, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert Parker, David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. 2011. English gigaword. Linguis- tic Data Consortium.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Extending neural generative conversational model using external knowledge sources", "authors": [ { "first": "Prasanna", "middle": [], "last": "Parthasarathi", "suffix": "" }, { "first": "Joelle", "middle": [], "last": "Pineau", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "690--695", "other_ids": {}, "num": null, "urls": [], "raw_text": "Prasanna Parthasarathi and Joelle Pineau. 2018. Ex- tending neural generative conversational model us- ing external knowledge sources. In Proceedings of the 2018 Conference on Empirical Methods in Natu- ral Language Processing, pages 690-695.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "KnowSemLM: A Knowledge Infused Semantic Language Model", "authors": [ { "first": "Haoruo", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Qiang", "middle": [], "last": "Ning", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2019, "venue": "Proc. 
of the Conference on Computational Natural Language Learning (CoNLL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Haoruo Peng, Qiang Ning, and Dan Roth. 2019. KnowSemLM: A Knowledge Infused Semantic Lan- guage Model. In Proc. of the Conference on Compu- tational Natural Language Learning (CoNLL).", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Improving language understanding by generative pre-training", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Karthik", "middle": [], "last": "Narasimhan", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018a. Improving language under- standing by generative pre-training. OpenAI Blog.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Improving language understanding by generative pre-training", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Karthik", "middle": [], "last": "Narasimhan", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018b. Improving language under- standing by generative pre-training. In Preprint.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Language models are unsupervised multitask learners", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Rewon", "middle": [], "last": "Child", "suffix": "" }, { "first": "David", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Dario", "middle": [], "last": "Amodei", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2019, "venue": "OpenAI Blog", "volume": "", "issue": "8", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8).", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Scaling memory-augmented neural networks with sparse reads and writes", "authors": [ { "first": "Jack", "middle": [], "last": "Rae", "suffix": "" }, { "first": "Jonathan", "middle": [ "J" ], "last": "Hunt", "suffix": "" }, { "first": "Ivo", "middle": [], "last": "Danihelka", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Harley", "suffix": "" }, { "first": "Andrew", "middle": [ "W" ], "last": "Senior", "suffix": "" }, { "first": "Gregory", "middle": [], "last": "Wayne", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Graves", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Lillicrap", "suffix": "" } ], "year": 2016, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "3621--3629", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jack Rae, Jonathan J Hunt, Ivo Danihelka, Timo- thy Harley, Andrew W Senior, Gregory Wayne, Alex Graves, and Timothy Lillicrap. 2016. Scal- ing memory-augmented neural networks with sparse reads and writes. 
In Advances in Neural Information Processing Systems, pages 3621-3629.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Hierarchical memory networks for answer selection on unknown words", "authors": [ { "first": "Jing", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Yiqun", "middle": [], "last": "Yao", "suffix": "" }, { "first": "Suncong", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "Bo", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2016, "venue": "Proceedings of COL-ING 2016, the 26th International Conference on Computational Linguistics: Technical Papers", "volume": "", "issue": "", "pages": "2290--2299", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jing Shi, Yiqun Yao, Suncong Zheng, Bo Xu, et al. 2016. Hierarchical memory networks for answer se- lection on unknown words. In Proceedings of COL- ING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 2290-2299.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Learning visual knowledge memory networks for visual question answering", "authors": [ { "first": "Zhou", "middle": [], "last": "Su", "suffix": "" }, { "first": "Chen", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Yinpeng", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Dongqi", "middle": [], "last": "Cai", "suffix": "" }, { "first": "Yurong", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Jianguo", "middle": [], "last": "Li", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", "volume": "", "issue": "", "pages": "7736--7745", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhou Su, Chen Zhu, Yinpeng Dong, Dongqi Cai, Yurong Chen, and Jianguo Li. 2018. Learning vi- sual knowledge memory networks for visual ques- tion answering. In Proceedings of the IEEE Confer- ence on Computer Vision and Pattern Recognition, pages 7736-7745.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Pullnet: Open domain question answering with iterative retrieval on knowledge bases and text", "authors": [ { "first": "Haitian", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Tania", "middle": [], "last": "Bedrax-Weiss", "suffix": "" }, { "first": "William", "middle": [ "W" ], "last": "Cohen", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1904.09537" ] }, "num": null, "urls": [], "raw_text": "Haitian Sun, Tania Bedrax-Weiss, and William W Co- hen. 2019. Pullnet: Open domain question answer- ing with iterative retrieval on knowledge bases and text. 
arXiv preprint arXiv:1904.09537.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Open domain question answering using early fusion of knowledge bases and text", "authors": [ { "first": "Haitian", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Bhuwan", "middle": [], "last": "Dhingra", "suffix": "" }, { "first": "Manzil", "middle": [], "last": "Zaheer", "suffix": "" }, { "first": "Kathryn", "middle": [], "last": "Mazaitis", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "William", "middle": [], "last": "Cohen", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "4231--4242", "other_ids": {}, "num": null, "urls": [], "raw_text": "Haitian Sun, Bhuwan Dhingra, Manzil Zaheer, Kathryn Mazaitis, Ruslan Salakhutdinov, and William Cohen. 2018. Open domain question answering using early fusion of knowledge bases and text. In Proceed- ings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4231-4242.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Representing text for joint embedding of text and knowledge bases", "authors": [ { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Pantel", "suffix": "" }, { "first": "Hoifung", "middle": [], "last": "Poon", "suffix": "" }, { "first": "Pallavi", "middle": [], "last": "Choudhury", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Gamon", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1499--1509", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kristina Toutanova, Danqi Chen, Patrick Pantel, Hoi- fung Poon, Pallavi Choudhury, and Michael Gamon. 2015. Representing text for joint embedding of text and knowledge bases. In Proceedings of the 2015 Conference on Empirical Methods in Natural Lan- guage Processing, pages 1499-1509.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Glue: A multi-task benchmark and analysis platform for natural language understanding", "authors": [ { "first": "Alex", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Amanpreet", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Julian", "middle": [], "last": "Michael", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Hill", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Bowman", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", "volume": "", "issue": "", "pages": "353--355", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. 
In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: An- alyzing and Interpreting Neural Networks for NLP, pages 353-355.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Leveraging knowledge bases in lstms for improving machine reading", "authors": [ { "first": "Bishan", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Mitchell", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1436--1446", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bishan Yang and Tom Mitchell. 2017. Leveraging knowledge bases in lstms for improving machine reading. In Proceedings of the 55th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1436-1446.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Enhancing unsupervised pretraining with external knowledge for natural language inference", "authors": [ { "first": "Xiaoyu", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Xiaodan", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Huasha", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Qiong", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yufei", "middle": [], "last": "Feng", "suffix": "" } ], "year": 2019, "venue": "Canadian Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "413--419", "other_ids": { "DOI": [ "https://link.springer.com/chapter/10.1007/978-3-030-18305-9_38" ] }, "num": null, "urls": [], "raw_text": "Xiaoyu Yang, Xiaodan Zhu, Huasha Zhao, Qiong Zhang, and Yufei Feng. 2019. Enhancing unsuper- vised pretraining with external knowledge for natu- ral language inference. In Canadian Conference on Artificial Intelligence, pages 413-419. Springer.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "text": "GPT with large episodic memory component", "num": null, "type_str": "figure" }, "TABREF0": { "content": "", "text": "compares features of our simple memory augmentation with those of other memory models.", "num": null, "type_str": "table", "html": null }, "TABREF2": { "content": "
Model Sizewock=1k=2k=5
GPT-Small35.15 29.29 30.54 32.38
GPT-Medium 22.78 19.84 20.54 21.48
GPT-Large19.90 17.41 18.00 18.80
GPT-Small23.03 21.01 21.89 22.66
", "text": ".", "num": null, "type_str": "table", "html": null }, "TABREF4": { "content": "
Datasetwock=1k=2k=5
Wikitext-228.67 28.96 28.95 28.70
Wikitext-103 25.38 25.68 25.56 25.39
", "text": "shows that irrelevant contexts have very little impact on perplexity.", "num": null, "type_str": "table", "html": null }, "TABREF5": { "content": "", "text": "Zero-shot perplexity using GPT-Small", "num": null, "type_str": "table", "html": null } } } }