{ "paper_id": "D17-1040", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:14:19.182851Z" }, "title": "Efficient Attention using a Fixed-Size Memory Representation", "authors": [ { "first": "Denny", "middle": [], "last": "Britz", "suffix": "", "affiliation": {}, "email": "dennybritz@google.com" }, { "first": "Melody", "middle": [ "Y" ], "last": "Guan", "suffix": "", "affiliation": {}, "email": "melodyguan@google.com" }, { "first": "Minh-Thang", "middle": [], "last": "Luong", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The standard content-based attention mechanism typically used in sequence-to-sequence models is computationally expensive as it requires the comparison of large encoder and decoder states at each time step. In this work, we propose an alternative attention mechanism based on a fixed size memory representation that is more efficient. Our technique predicts a compact set of K attention contexts during encoding and lets the decoder compute an efficient lookup that does not need to consult the memory. We show that our approach performs on-par with the standard attention mechanism while yielding inference speedups of 20% for real-world translation tasks and more for tasks with longer sequences. By visualizing attention scores we demonstrate that our models learn distinct, meaningful alignments.", "pdf_parse": { "paper_id": "D17-1040", "_pdf_hash": "", "abstract": [ { "text": "The standard content-based attention mechanism typically used in sequence-to-sequence models is computationally expensive as it requires the comparison of large encoder and decoder states at each time step. In this work, we propose an alternative attention mechanism based on a fixed size memory representation that is more efficient. Our technique predicts a compact set of K attention contexts during encoding and lets the decoder compute an efficient lookup that does not need to consult the memory. We show that our approach performs on-par with the standard attention mechanism while yielding inference speedups of 20% for real-world translation tasks and more for tasks with longer sequences. 
By visualizing attention scores we demonstrate that our models learn distinct, meaningful alignments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Sequence-to-sequence models (Sutskever et al., 2014; have achieved state of the art results across a wide variety of tasks, including Neural Machine Translation (NMT) (Bahdanau et al., 2014; Wu et al., 2016 ), text summarization (Rush et al., 2015; Nallapati et al., 2016) , speech recognition (Chan et al., 2015; Chorowski and Jaitly, 2016) , image captioning (Xu et al., 2015) , and conversational modeling Li et al., 2015) .", "cite_spans": [ { "start": 28, "end": 52, "text": "(Sutskever et al., 2014;", "ref_id": "BIBREF20" }, { "start": 167, "end": 190, "text": "(Bahdanau et al., 2014;", "ref_id": "BIBREF1" }, { "start": 191, "end": 206, "text": "Wu et al., 2016", "ref_id": "BIBREF23" }, { "start": 229, "end": 248, "text": "(Rush et al., 2015;", "ref_id": "BIBREF17" }, { "start": 249, "end": 272, "text": "Nallapati et al., 2016)", "ref_id": "BIBREF15" }, { "start": 294, "end": 313, "text": "(Chan et al., 2015;", "ref_id": null }, { "start": 314, "end": 341, "text": "Chorowski and Jaitly, 2016)", "ref_id": "BIBREF6" }, { "start": 361, "end": 378, "text": "(Xu et al., 2015)", "ref_id": "BIBREF24" }, { "start": 409, "end": 425, "text": "Li et al., 2015)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The most popular approaches are based on an encoder-decoder architecture consisting of two recurrent neural networks (RNNs) and an attention mechanism that aligns target to source tokens (Bahdanau et al., 2014; Luong et al., 2015) . The typical attention mechanism used in these architectures computes a new attention context at each decoding step based on the current state of the decoder. Intuitively, this corresponds to looking at the source sequence after the output of every single target token.", "cite_spans": [ { "start": 187, "end": 210, "text": "(Bahdanau et al., 2014;", "ref_id": "BIBREF1" }, { "start": 211, "end": 230, "text": "Luong et al., 2015)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Inspired by how humans process sentences, we believe it may be unnecessary to look back at the entire original source sequence at each step. 1 We thus propose an alternative attention mechanism (section 3) that leads to smaller computational time complexity. Our method predicts K attention context vectors while reading the source, and learns to use a weighted average of these vectors at each step of decoding. Thus, we avoid looking back at the source sequence once it has been encoded. We show (section 4) that this speeds up inference while performing on-par with the standard mechanism on both toy and real-world WMT translation datasets. We also show that our mechanism leads to larger speedups as sequences get longer. Finally, by visualizing the attention scores (section 5), we verify that the proposed technique learns meaningful alignments, and that different attention context vectors specialize on different parts of the source.", "cite_spans": [ { "start": 141, "end": 142, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our models are based on an encoder-decoder architecture with attention mechanism (Bahdanau et al., 2014; Luong et al., 2015) . 
An encoder function takes as input a sequence of source tokens x = (x_1, ..., x_m) and produces a sequence of states s = (s_1, ..., s_m). The decoder is an RNN that predicts the probability of a target sequence y = (y_1, ..., y_T | s). The probability of each target token y_i \u2208 {1, ..., |V|} is predicted based on the recurrent state in the decoder RNN, h_i, the previous words, y_{<i}, and a context vector c_i.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" } ], "ref_entries": { "TABREF0": { "text": "Model | Decoding Time (s)\nK = 32 | 26.85\nK = 64 | 27.13\nAttention | 33.28", "type_str": "table" } } } }