{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:34:51.246084Z" }, "title": "Predicting Attention Sparsity in Transformers", "authors": [ { "first": "Marcos", "middle": [], "last": "Treviso", "suffix": "", "affiliation": { "laboratory": "", "institution": "Instituto de Telecomunica\u00e7\u00f5es", "location": { "settlement": "Lisbon", "country": "Portugal" } }, "email": "" }, { "first": "Ant\u00f3nio", "middle": [], "last": "G\u00f3is", "suffix": "", "affiliation": { "laboratory": "", "institution": "Universit\u00e9 de Montr\u00e9al", "location": { "settlement": "Mila", "country": "Canada" } }, "email": "" }, { "first": "Patrick", "middle": [], "last": "Fernandes", "suffix": "", "affiliation": { "laboratory": "", "institution": "Instituto de Telecomunica\u00e7\u00f5es", "location": { "settlement": "Lisbon", "country": "Portugal" } }, "email": "" }, { "first": "Erick", "middle": [], "last": "Fonseca", "suffix": "", "affiliation": { "laboratory": "", "institution": "Kaufland e-commerce", "location": { "settlement": "Cologne", "country": "Germany" } }, "email": "" }, { "first": "Andr\u00e9", "middle": [ "F T" ], "last": "Martins", "suffix": "", "affiliation": { "laboratory": "", "institution": "Instituto de Telecomunica\u00e7\u00f5es", "location": { "settlement": "Lisbon", "country": "Portugal" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Transformers' quadratic complexity with respect to the input sequence length has motivated a body of work on efficient sparse approximations to softmax. An alternative path, used by entmax transformers, consists of having built-in exact sparse attention; however this approach still requires quadratic computation. In this paper, we propose Sparsefinder, a simple model trained to identify the sparsity pattern of entmax attention before computing it. We experiment with three variants of our method, based on distances, quantization, and clustering, on two tasks: machine translation (attention in the decoder) and masked language modeling (encoder-only). Our work provides a new angle to study model efficiency by doing extensive analysis of the tradeoff between the sparsity and recall of the predicted attention graph. This allows for detailed comparison between different models along their Pareto curves, important to guide future benchmarks for sparse attention models.", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "Transformers' quadratic complexity with respect to the input sequence length has motivated a body of work on efficient sparse approximations to softmax. An alternative path, used by entmax transformers, consists of having built-in exact sparse attention; however this approach still requires quadratic computation. In this paper, we propose Sparsefinder, a simple model trained to identify the sparsity pattern of entmax attention before computing it. We experiment with three variants of our method, based on distances, quantization, and clustering, on two tasks: machine translation (attention in the decoder) and masked language modeling (encoder-only). Our work provides a new angle to study model efficiency by doing extensive analysis of the tradeoff between the sparsity and recall of the predicted attention graph. 
This allows for detailed comparison between different models along their Pareto curves, important to guide future benchmarks for sparse attention models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Transformer-based architectures have achieved remarkable results in many NLP tasks (Vaswani et al., 2017; Devlin et al., 2019; Brown et al., 2020) . However, they also bring important computational and environmental concerns, caused by their quadratic time and memory computation requirements with respect to the sequence length. This comes in addition to the difficulty of interpreting their inner workings, caused by their overparametrization and large number of attention heads.", "cite_spans": [ { "start": 83, "end": 105, "text": "(Vaswani et al., 2017;", "ref_id": "BIBREF28" }, { "start": 106, "end": 126, "text": "Devlin et al., 2019;", "ref_id": "BIBREF9" }, { "start": 127, "end": 146, "text": "Brown et al., 2020)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "There is a large body of work developing ways to \"sparsify\" the computation in transformers, either by imposing local or fixed attention patterns (Child et al., 2019; Tay et al., 2020; Zaheer et al., 2020) , by applying low-rank kernel approximations to softmax Choromanski et al., 2021) , * Work done at Instituto de Telecomunica\u00e7\u00f5es. Correspondence to marcos.treviso@tecnico.ulisboa.pt (b) Project query and key vectors to a smaller and appropriated space such that similar points are likely to fall in the same vicinity; (c) Additionally, we can combine window and global patterns (green blocks) with the learned pattern (yellow blocks) to increase the recall in recovering ground-truth edges from the sparse graph at the top (starred blocks).", "cite_spans": [ { "start": 146, "end": 166, "text": "(Child et al., 2019;", "ref_id": "BIBREF4" }, { "start": 167, "end": 184, "text": "Tay et al., 2020;", "ref_id": "BIBREF26" }, { "start": 185, "end": 205, "text": "Zaheer et al., 2020)", "ref_id": "BIBREF36" }, { "start": 262, "end": 287, "text": "Choromanski et al., 2021)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "or by learning which queries and keys should be grouped together (Kitaev et al., 2019; Daras et al., 2020; Roy et al., 2021; Wang et al., 2021) . Most of the existing work seeks to approximate softmaxbased attention by ignoring the (predicted) tails of the distribution, which can lead to performance degradation. An exception is transformers with entmax-based sparse attention (Correia et al., 2019) , a content-based approach which is natively sparse -this approach has the ability to let each attention head learn from data how sparse it should be, eliminating the need for heuristics or approximations. 
The disadvantage of this approach is that it still requires a quadratic computation to determine the sparsity pattern, failing to take computational advantage of attention sparsity.", "cite_spans": [ { "start": 65, "end": 86, "text": "(Kitaev et al., 2019;", "ref_id": "BIBREF13" }, { "start": 87, "end": 106, "text": "Daras et al., 2020;", "ref_id": "BIBREF7" }, { "start": 107, "end": 124, "text": "Roy et al., 2021;", "ref_id": "BIBREF23" }, { "start": 125, "end": 143, "text": "Wang et al., 2021)", "ref_id": "BIBREF32" }, { "start": 378, "end": 400, "text": "(Correia et al., 2019)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we propose Sparsefinder, which fills the gap above by making entmax attention more efficient ( \u00a74). Namely, we investigate three methods to predict the sparsity pattern of entmax without having to compute it: one based on metric learning, which is still quadratic but with a better constant ( \u00a74.3), one based on quantization ( \u00a74.4), and another based on clustering ( \u00a74.5). In all cases, the predictors are trained offline on ground-truth sparse attention graphs from an entmax transformer, seeking high recall in their predicted edges without compromising the total amount of sparsity. Figure 1 illustrates our method.", "cite_spans": [], "ref_spans": [ { "start": 604, "end": 612, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "More precisely, to evaluate the effectiveness of our method across different scenarios, we perform experiments on two NLP tasks, encompassing encoder-only and decoder-only configurations: machine translation (MT, \u00a75) and masked language modeling (MLM, \u00a76), doing an extensive analysis of the tradeoff between sparsity and recall (i.e., performance on the attention graph approximation), and sparsity and accuracy (performance on downstream tasks). We compare our method with four alternative solutions based on efficient transformers: Longformer (Beltagy et al., 2020) , Bigbird (Zaheer et al., 2020), Reformer , and Routing Transformer (Roy et al., 2021) , along their entire Pareto curves. We complement these experiments by analyzing qualitatively what is selected by the different attention heads at the several layers and represented in different clusters/buckets. 
Overall, our contributions are: 1", "cite_spans": [ { "start": 546, "end": 568, "text": "(Beltagy et al., 2020)", "ref_id": "BIBREF1" }, { "start": 637, "end": 655, "text": "(Roy et al., 2021)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We propose a simple method that exploits learnable sparsity patterns to efficiently compute multi-head attention ( \u00a74).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We do an extensive analysis of the tradeoff between sparsity and recall, and sparsity and accuracy in MT ( \u00a75) and MLM ( \u00a76), showing that there is clear room for improvement in the design of efficient transformers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We qualitatively analyze what is selected by the different attention heads at various layers and represented in different clusters/buckets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Interpreting multi-head attention. Several works analyze the functionalities learned by different attention heads, such as positional and local context patterns (Raganato and Tiedemann, 2018; Voita et al., 2019) . Building upon prior work on sparse attention mechanisms (Peters et al., 2019) , Correia et al. (2019) constrain the attention heads to induce sparse selections individually for each head, bringing interpretability without post-hoc manipulation. Related approaches include the explicit sparse transformer (Zhao et al., 2019) and rectified linear attention (Zhang et al., 2021) , which drops the normalization constraint. Raganato et al. (2020) show that it is possible to fix attention patterns based on previously known behavior (e.g. focusing on previous token) while improving translation quality. However, a procedure that exploits learnable sparsity patterns to accelerate multi-head attention is still missing.", "cite_spans": [ { "start": 161, "end": 191, "text": "(Raganato and Tiedemann, 2018;", "ref_id": "BIBREF22" }, { "start": 192, "end": 211, "text": "Voita et al., 2019)", "ref_id": "BIBREF29" }, { "start": 270, "end": 291, "text": "(Peters et al., 2019)", "ref_id": "BIBREF19" }, { "start": 294, "end": 315, "text": "Correia et al. (2019)", "ref_id": "BIBREF6" }, { "start": 518, "end": 537, "text": "(Zhao et al., 2019)", "ref_id": "BIBREF38" }, { "start": 569, "end": 589, "text": "(Zhang et al., 2021)", "ref_id": "BIBREF37" }, { "start": 634, "end": 656, "text": "Raganato et al. (2020)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Low-rank softmax approximations. Methods based on low-rank approximation to the softmax such as Linearized Attention , Linformer , and Performer (Choromanski et al., 2021) reduce both speed and memory complexity of the attention mechanism from quadratic to linear, but make interpretability more challenging because the scores are not computed explicitly. On the other hand, methods that focus on inducing sparse patterns provide interpretable alignments and also have performance gains in terms of speed and memory.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Fixed attention patterns. 
Among fixed pattern methods, Sparse Transformer (Child et al., 2019) and LongFormer (Beltagy et al., 2020) attend to fixed positions by using strided/dilated sliding windows. BigBird uses random and two fixed patterns (global and window) to build a block sparse matrix representation (Zaheer et al., 2020) , taking advantage of block matrix operations to accelerate GPU computations. In contrast, we replace the random pattern with a learned pattern that mimics pretrained \u03b1-entmax sparse attention graphs.", "cite_spans": [ { "start": 74, "end": 94, "text": "(Child et al., 2019)", "ref_id": "BIBREF4" }, { "start": 110, "end": 132, "text": "(Beltagy et al., 2020)", "ref_id": "BIBREF1" }, { "start": 310, "end": 331, "text": "(Zaheer et al., 2020)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Learnable attention patterns. Learnable pattern methods usually have to deal with assignment decisions within the multi-head attention mechanism. Clustered Attention groups query tokens into clusters and computes dot-products only with centroids. Reformer (Kitaev et al., 2020) and SMYRF (Daras et al., 2020) use locality-sensitive hashing to efficiently group tokens in buckets. More similar to our work, Routing Transformer (Roy et al., 2021) and Cluster-Former (Wang et al., 2021) cluster queries and keys with online k-means and compute dot-products over the top-k cluster points. Some queries and keys are discarded due to this filtering, which affects the overall recall of the method (as we show in \u00a75 and \u00a76). The ability of Routing Transformer to benefit from contextual information has been analyzed by . In contrast, Sparsefinder learns to cluster based on sparsity patterns from attention graphs generated by \u03b1-entmax.", "cite_spans": [ { "start": 288, "end": 308, "text": "(Daras et al., 2020)", "ref_id": "BIBREF7" }, { "start": 426, "end": 444, "text": "(Roy et al., 2021)", "ref_id": "BIBREF23" }, { "start": 464, "end": 483, "text": "(Wang et al., 2021)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "3 Background", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The main component of transformers is the multihead attention mechanism (Vaswani et al., 2017) . Given as input a matrix Q \u2208 R n\u00d7d containing d-dimensional representations for n queries, and matrices K, V \u2208 R m\u00d7d for m keys and values, the scaled dot-product attention at a single head is computed in the following way:", "cite_spans": [ { "start": 72, "end": 94, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Transformers", "sec_num": "3.1" }, { "text": "att(Q, K, V) = \u03c0 QK \u22a4 \u221a d Z\u2208R n\u00d7m V \u2208 R n\u00d7d . (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transformers", "sec_num": "3.1" }, { "text": "The \u03c0 transformation maps rows to distributions, with softmax being the most common choice, \u03c0(Z) ij = softmax(z i ) j . Multi-head attention is computed by evoking Eq. 
1 in parallel for each head h:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transformers", "sec_num": "3.1" }, { "text": "head h (Q, K, V) = att(QW Q h , KW K h , VW V h ),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transformers", "sec_num": "3.1" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transformers", "sec_num": "3.1" }, { "text": "W Q h , W K h , W V", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transformers", "sec_num": "3.1" }, { "text": "h are learned linear transformations. This way, heads are able to learn specialized phenomena. According to the nature of the input, transformers have three types of multihead attention mechanism: encoder self-attention (source-to-source), decoder self-attention (targetto-target), and decoder cross-attention (target-tosource). While there are no restrictions to which elements can be attended to in the encoder, elements in position j > i in the decoder self-attention are masked at timestep i (\"causal mask\").", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transformers", "sec_num": "3.1" }, { "text": "The main computational bottleneck in transformers is the matrix multiplication QK \u22a4 in Eq. 1, which costs O(nmd) time and can be impractical when n and m are large. Many approaches, discussed in \u00a72, approximate Eq. 1 by ignoring entries far from the main diagonal or computing only some blocks of this matrix, with various heuristics. By doing so, the result will be an approximation of the softmax attention in Eq. 1. This is because the original softmax-based attention is dense, i.e., it puts some probability mass on all tokens -not only a computational disadvantage, but also making interpretation harder, as it has been observed that only a small fraction of attention heads capture relevant information (Voita et al., 2019) . An alternative to softmax is the \u03b1-entmax transformation (Peters et al., 2019; Correia et al., 2019) , which leads to sparse patterns directly, without any approximation:", "cite_spans": [ { "start": 710, "end": 730, "text": "(Voita et al., 2019)", "ref_id": "BIBREF29" }, { "start": 790, "end": 811, "text": "(Peters et al., 2019;", "ref_id": "BIBREF19" }, { "start": 812, "end": 833, "text": "Correia et al., 2019)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Extmax Transformers and Learned Sparsity", "sec_num": "3.2" }, { "text": "\u03b1-entmax(z) = [(\u03b1 \u2212 1)z \u2212 \u03c4 (z)1] 1 /\u03b1\u22121 + , (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extmax Transformers and Learned Sparsity", "sec_num": "3.2" }, { "text": "where [\u2022] + is the positive part (ReLU) function, and", "cite_spans": [ { "start": 6, "end": 9, "text": "[\u2022]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Extmax Transformers and Learned Sparsity", "sec_num": "3.2" }, { "text": "\u03c4 : R n \u2192 R is a normalizing function satisfying j [(\u03b1 \u2212 1)z j \u2212 \u03c4 (z)] 1 /\u03b1\u22121 + = 1 for any z.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extmax Transformers and Learned Sparsity", "sec_num": "3.2" }, { "text": "That is, entries with score z j \u2264 \u03c4 (z) /\u03b1\u22121 get exactly zero probability. 
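As a concrete illustration, the minimal sketch below (assuming the open-source entmax PyTorch package; any \u03b1-entmax implementation would do) contrasts the dense output of softmax with the exact zeros produced by 1.5-entmax:

```python
import torch
from entmax import entmax15  # assumed dependency: the open-source `entmax` package

# Scores for one query over six keys.
z = torch.tensor([2.3, 1.9, 0.1, -0.5, -1.2, -2.0])

p_soft = torch.softmax(z, dim=-1)  # dense: every key receives some probability mass
p_ent = entmax15(z, dim=-1)        # 1.5-entmax: keys below the threshold get exactly zero

print(p_soft.gt(0).sum().item())   # 6 -- softmax never prunes
print(p_ent.gt(0).sum().item())    # typically < 6 -- pruned keys drop out of the attention graph
```
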
In the limit \u03b1 \u2192 1, \u03b1-entmax recovers the softmax function, while for any value of \u03b1 > 1 this transformation can return sparse probability vectors (as the value of \u03b1 increases, the induced probability distribution becomes more sparse). When \u03b1 = 2, we recover sparsemax (Martins and Astudillo, 2016). In this paper, we use \u03b1 = 1.5, which works well in practice and has a specialized fast algorithm (Peters et al., 2019) .", "cite_spans": [ { "start": 472, "end": 493, "text": "(Peters et al., 2019)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Extmax Transformers and Learned Sparsity", "sec_num": "3.2" }, { "text": "Although sparse attention improves interpretability and head diversity when compared to dense alternatives (Correia et al., 2019), the learned sparsity patterns cannot be trivially exploited to reduce the quadratic burden of self-attention, since we still need to compute dot-products between all queries and keys (QK \u22a4 ) before applying the \u03b1-entmax transformation. In the next section ( \u00a74), we propose a simple method that learns to identify these sparsity patterns beforehand, avoiding the full matrix multiplication.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extmax Transformers and Learned Sparsity", "sec_num": "3.2" }, { "text": "We now propose our method to extract sparse attention graphs and learn where to attend by exploiting a special property of \u03b1-entmax: sparse-consistency ( \u00a74.1). We design three variants of Sparsefinder to that end, based on metric learning ( \u00a74.3), quantization ( \u00a74.4), and clustering ( \u00a74.5).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sparsefinder", "sec_num": "4" }, { "text": "For each attention head h, we define its attention graph as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attention graph and sparse-consistency", "sec_num": "4.1" }, { "text": "G h = {(q i , k j ) | p i,j > 0}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attention graph and sparse-consistency", "sec_num": "4.1" }, { "text": ", a bipartite graph connecting query and key pairs q i , k j \u2208 R d for which the \u03b1-entmax probability p i,j is nonzero. An example of attention graph is shown in Figure 1 . We denote by |G h | the total size of an attention graph, i.e., its number of edges. With \u03b1-entmax with \u03b1 = 1.5 we typically have |G h | \u226a nm. In contrast, softmax attention always leads to a complete graph, |G h | = nm.", "cite_spans": [], "ref_spans": [ { "start": 162, "end": 170, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Attention graph and sparse-consistency", "sec_num": "4.1" }, { "text": "Problem statement. Our goal is to build a model -which we call Sparsefinder -that predicts\u011c h \u2248 G h without having to perform all pairwise comparisons between queries and keys. This enables the complexity of evaluating Eq. 1 to be reduced from O(nmd) to O(|\u011c h |d), effectively taking advantage of the sparsity of \u03b1-entmax. In order to learn such a model, we first extract a dataset of sparse attention graphs {G h } from a pretrained entmax-based transformer, which acts as a teacher. Then, the student learns where to pay attention based on this information. 
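The extraction step itself is simple; the sketch below (a hypothetical helper assuming the `entmax` package and access to the teacher's per-head projection matrices, with the causal mask of decoder self-attention omitted for brevity) reads off G h as the boolean support of one head's attention matrix:

```python
import torch
from entmax import entmax15  # assumed dependency

def extract_attention_graph(queries, keys, w_q, w_k):
    """Boolean edge mask G_h for one teacher head (sketch; causal mask omitted).

    queries: (n, d) and keys: (m, d) hidden states;
    w_q, w_k: the head's learned projections W_h^Q, W_h^K with shape (d, d_head).
    """
    q = queries @ w_q                                    # (n, d_head)
    k = keys @ w_k                                       # (m, d_head)
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    probs = entmax15(scores, dim=-1)                     # sparse attention rows
    return probs > 0                                     # (n, m) mask: the edges of G_h
```
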
This procedure is motivated by the following sparse-consistency property of \u03b1-entmax:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attention graph and sparse-consistency", "sec_num": "4.1" }, { "text": "Proposition 1 (Sparse-consistency property). Let b be a binary vector such that b j = 1 if p \u22c6 j > 0, and b j = 0 otherwise. For any binary mask vector m \"dominated\" by b (i.e. m \u2299 b = b), we have", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attention graph and sparse-consistency", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03b1-entmax(z) = \u03b1-entmax(z| m ),", "eq_num": "(3)" } ], "section": "Attention graph and sparse-consistency", "sec_num": "4.1" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attention graph and sparse-consistency", "sec_num": "4.1" }, { "text": "z j | m = z j if m j = 1 and \u2212\u221e if m j = 0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attention graph and sparse-consistency", "sec_num": "4.1" }, { "text": "Proof. See \u00a7A in the supplemental material.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attention graph and sparse-consistency", "sec_num": "4.1" }, { "text": "This property ensures that, if\u011c h is such that G h \u2286\u011c h , then we obtain exactly the same result as with the original entmax attention. Therefore, we are interested in having high recall,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attention graph and sparse-consistency", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "recall(\u011c h ; G h ) = |\u011c h \u2229 G h | |G h | ,", "eq_num": "(4)" } ], "section": "Attention graph and sparse-consistency", "sec_num": "4.1" }, { "text": "meaning that our method is nearly exact, and high sparsity,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attention graph and sparse-consistency", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "sparsity(\u011c h ) = 1 \u2212 |\u011c h | nm ,", "eq_num": "(5)" } ], "section": "Attention graph and sparse-consistency", "sec_num": "4.1" }, { "text": "which indicates that computation can be made efficient. 2 Although a high sparsity may indicate that many computations can be ignored, converting this theoretical result into efficient computation is not trivial and potentially hardware-dependent. In this paper, rather than proposing a practical computational efficient method, we focus on showing that such methods do exist and that they can be designed to outperform fixed and learned pattern methods while retaining a high amount of sparsity when compared to the ground-truth graph.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attention graph and sparse-consistency", "sec_num": "4.1" }, { "text": "Our strategies. We teach the student model to predict\u011c h \u2248 G h by taking inspiration from the Reformer model and the Routing Transformer (Roy et al., 2021) . Formally, we define a set of B buckets, B = {1, . . . 
, B}, and learn functions f q , f k :", "cite_spans": [ { "start": 137, "end": 155, "text": "(Roy et al., 2021)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Attention graph and sparse-consistency", "sec_num": "4.1" }, { "text": "R d \u2192 2 B \\ {\u2205}, which", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attention graph and sparse-consistency", "sec_num": "4.1" }, { "text": "assign a query or a key to one or more buckets. We will discuss in the sequel different design strategies for the functions f q , f k . Given these functions, the predicted graph is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attention graph and sparse-consistency", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "G h = {(q i , k j ) | f q (q i ) \u2229 f k (k j ) \u0338 = \u2205},", "eq_num": "(6)" } ], "section": "Attention graph and sparse-consistency", "sec_num": "4.1" }, { "text": "that is, an edge is predicted between q i and k j iff they are together in some bucket. We present three strategies, based on distancebased pairing ( \u00a74.3), quantization ( \u00a74.4) and clustering ( \u00a74.5). As a first step, all strategies require learning a metric that embeds the graph (projecting queries and keys) into a lower-dimensional space R r with r \u226a d, such that positive query-key pairs are close to each other, and negative pairs are far apart.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attention graph and sparse-consistency", "sec_num": "4.1" }, { "text": "According to the \u03b1-entmax sparse-consistency property, in order to get a good approximation of G h , we would like that f q and f k produce a grap\u0125 G h that maximizes recall, defined in Eq. 4. However, maximizing recall in this setting is difficult since we do not have ground-truth bucket assignments. Instead, we recur to a contrastive learning approach by learning projections via negative sampling, which is simpler and more scalable than constrained clustering approaches (Wagstaff et al., 2001 ; de Amorim, 2012).", "cite_spans": [ { "start": 477, "end": 499, "text": "(Wagstaff et al., 2001", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Learning projections", "sec_num": "4.2" }, { "text": "For each head, we start by projecting the original query and key q, k \u2208", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning projections", "sec_num": "4.2" }, { "text": "R d vectors into lower dimensional vectors q \u2032 , k \u2032 \u2208 R r such that r \u226a d.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning projections", "sec_num": "4.2" }, { "text": "In practice, we use a simple head-wise linear projection for all queries and keys g \u03b8 : R d \u2192 R r . 
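A minimal sketch of this module is given below (assuming PyTorch; one instance per head, with the defaults d = 64 and r = 4 matching the values used in our masked language modeling experiments):

```python
import torch
import torch.nn as nn

class HeadProjection(nn.Module):
    """Head-wise projection g_theta: R^d -> R^r applied to that head's queries and keys."""

    def __init__(self, d: int = 64, r: int = 4):
        super().__init__()
        self.linear = nn.Linear(d, r)  # only a few hundred parameters per head

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (..., d) query or key vectors of a single head -> (..., r)
        return self.linear(x)
```
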
To learn the parameters of the projection layer we minimize a hinge loss with margin \u03c9 for each head h:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning projections", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L \u03b8 (G h ) = \u03c9 + \u2225q \u2032 \u2212 k \u2032 P \u2225 2 2 \u2212 \u2225q \u2032 \u2212 k \u2032 N \u2225 2 2 + ,", "eq_num": "(7)" } ], "section": "Learning projections", "sec_num": "4.2" }, { "text": "where (q \u2032 , k \u2032 P ) \u2208 G h is a positive pair and (q \u2032 , k \u2032 N ) / \u2208 G h is a negative pair sampled uniformly at random. In words, we want the distance between a query vector to negative pairs to be larger than the distance to positive pairs by a margin \u03c9. This approach can also be seen as a weakly-supervised learning problem, where the goal is to push dissimilar points away while keeping similar points close to each other (Xing et al., 2002; Weinberger and Saul, 2009; Bellet et al., 2015) .", "cite_spans": [ { "start": 427, "end": 446, "text": "(Xing et al., 2002;", "ref_id": "BIBREF35" }, { "start": 447, "end": 473, "text": "Weinberger and Saul, 2009;", "ref_id": "BIBREF34" }, { "start": 474, "end": 494, "text": "Bellet et al., 2015)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Learning projections", "sec_num": "4.2" }, { "text": "To take advantage of the proximity of data points on the embedded space, we first propose a simple method to connect query and key pairs whose Euclidean distance is less than a threshold t, i.e.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Distance-based pairing", "sec_num": "4.3" }, { "text": "G h = {(q i , k j ) | \u2225q \u2032 i \u2212 k \u2032 j \u2225 2 \u2264 t}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Distance-based pairing", "sec_num": "4.3" }, { "text": "Although this method also requires O(n 2 ) computations, it is more efficient than a vanilla transformer since it reduces computations by a factor of d/r by using the learned projections. This method is also useful to probe the quality of the embedded space learned by the projections, since the recall of our other methods will be contingent on it.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Distance-based pairing", "sec_num": "4.3" }, { "text": "Our second strategy quantizes each dimension 1, . . . , r of the lower-dimensional space into \u03b2 bins, placing the queries and keys into the corresponding buckets (B = r\u03b2 buckets in total). This way, each q i and k j will be placed in exactly r buckets (one per dimension). If q i and k j are together in some bucket, Sparsefinder predicts that (q i , k j ) \u2208\u011c h . Note that for this quantization strategy no learning is needed, only the hyperparameter \u03b2 and the binning strategy need to be chosen. We propose a fixed-size binning strategy: divide each dimension into \u03b2 bins such that all bins have exactly \u2308n/\u03b2\u2309 elements. In practice, we append padding symbols to the input to ensure that bins are balanced.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Buckets through quantization", "sec_num": "4.4" }, { "text": "The clustering strategy uses the low-dimensional projections and runs a clustering algorithm to assign q i and k j to one or more clusters. In this case, each cluster corresponds to a bucket. 
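Before specifying the clustering itself, note that the quantization and clustering variants share the prediction step of Eq. 6: an edge is kept whenever a query and a key fall into at least one common bucket. A minimal sketch of that step is shown below (hypothetical helper; it assumes bucket assignments are given as index tensors produced by f q and f k):

```python
import torch

def predicted_graph(query_buckets: torch.Tensor, key_buckets: torch.Tensor) -> torch.Tensor:
    """Bucket-intersection rule of Eq. 6 (a sketch, not the released implementation).

    query_buckets: (n, k) bucket indices assigned to each query by f_q;
    key_buckets:   (m, k) bucket indices assigned to each key by f_k.
    Returns an (n, m) boolean mask with True where some bucket is shared.
    """
    shared = query_buckets[:, None, :, None] == key_buckets[None, :, None, :]
    return shared.any(dim=-1).any(dim=-1)  # (n, m) predicted edges
```

In practice this mask is further unioned with the window and global patterns described later.
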
In our paper, we employed k-means to learn B centroids {c 1 , . . . , c B }, where each c b \u2208 R r , over a small portion of the training set. This strategy is similar to the Routing Transformer's online k-means (Roy et al., 2021) , but with two key differences: (a) our clustering step is applied offline; (b) we assign points to the top-k closest centroids rather than assigning the closest top-k closest points to each centroid, ensuring that all queries are assigned to a cluster. 3 At test time, we use the learned centroids to group queries and keys into k clusters each:", "cite_spans": [ { "start": 403, "end": 421, "text": "(Roy et al., 2021)", "ref_id": "BIBREF23" }, { "start": 676, "end": 677, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Buckets through clustering", "sec_num": "4.5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "f q (q i ) = arg top-k 1\u2264b\u2264B \u2212\u2225q i \u2212 c b \u2225 2 2 ,", "eq_num": "(8)" } ], "section": "Buckets through clustering", "sec_num": "4.5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "f k (k j ) = arg top-k 1\u2264b\u2264B \u2212\u2225k j \u2212 c b \u2225 2 2 ,", "eq_num": "(9)" } ], "section": "Buckets through clustering", "sec_num": "4.5" }, { "text": "where the arg top-k operator returns the indices of the k th largest elements. As in the quantizationbased approach, queries and keys will attend to each other, i.e., Sparsefinder predicts (q i , k j ) \u2208\u011c h if they share at least one cluster among the k closest ones. Smaller values of k will induce high sparsity graphs, whereas a larger k is likely to produce a denser graph but with a higher recall.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Buckets through clustering", "sec_num": "4.5" }, { "text": "Let L be the maximum number of elements in a bucket. The time and memory cost of bucketed attention computed through quantization or clustering is O(BL 2 ). With balanced buckets, we get a complexity of O(n 1.5 ) by setting B = \u221a n. Although this cost is sub-quadratic, leveraging the sparse structure of\u011c h in practice is challenging, since it might require specialized hardware or kernels. In general, we have |\u011c h | = B b=1 n b m b \u226a nm, where n b and m b are the number of queries and keys in each bucket, since we have small complete bipartite graphs on each bucket. Instead of viewing quadratic methods only in light of their performance, we adopt an alternative view of assessing the tradeoff of these methods in terms of sparsity and recall of their approximation\u011c h . This offers a theoretical perspective to the potential performance of each approximation on downstream tasks, helping to find the best approximations for a desired level of sparsity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computational cost", "sec_num": "4.6" }, { "text": "As pointed out in prior work (Voita et al., 2019) , several attention heads rely strongly in local patterns or prefer to attend to a particular position, more promimently in initial layers. 
Therefore, we take inspiration from the Longformer (Beltagy et al., 2020) and BigBird (Zaheer et al., 2020) and combine learned sparse patterns with window and global patterns by adding connections in the predicted graph\u011c h to improve the recall of all methods. Figure 1 illustrates how these patterns are combined in the last step.", "cite_spans": [ { "start": 29, "end": 49, "text": "(Voita et al., 2019)", "ref_id": "BIBREF29" }, { "start": 241, "end": 263, "text": "(Beltagy et al., 2020)", "ref_id": "BIBREF1" }, { "start": 276, "end": 297, "text": "(Zaheer et al., 2020)", "ref_id": "BIBREF36" } ], "ref_spans": [ { "start": 452, "end": 460, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Combining learned and fixed patterns", "sec_num": "4.7" }, { "text": "Setup. We pretrain a transformer-large model (6 layers, 16 heads) on the Paracrawl dataset (Espl\u00e0 et al., 2019) . Next, we finetune it with \u03b1-entmax, fixing \u03b1 = 1.5 for all heads, on EN\u2192DE and EN\u2192FR language pairs from IWSLT17 (Cettolo et al., 2017) . We use the 2011-2014 sets as validation data and the 2015 set as test data. We encode each word using byte pair encoding (BPE, Sennrich et al. 2016) with a joint segmentation of 32k merges. As Vaswani et al. (2017) , we finetune our models using the Adam optimizer with an inverse square root learning rate scheduler, with an initial value of 5 \u00d7 10 \u22124 and a linear warm-up in the first 4000 steps. We evaluate translation quality with sacreBLEU (Post, 2018) . Training details, hyperparameters, and data statistics are described in \u00a7C.", "cite_spans": [ { "start": 91, "end": 111, "text": "(Espl\u00e0 et al., 2019)", "ref_id": "BIBREF10" }, { "start": 227, "end": 249, "text": "(Cettolo et al., 2017)", "ref_id": "BIBREF3" }, { "start": 445, "end": 466, "text": "Vaswani et al. (2017)", "ref_id": "BIBREF28" }, { "start": 698, "end": 710, "text": "(Post, 2018)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments: Machine Translation", "sec_num": "5" }, { "text": "Learning projections. To learn projections for queries and keys ( \u00a74.2), we randomly selected 10K long instances (n > 20 tokens) from the training set and extracted the \u03b1-entmax attention graphs G h from the decoder self-attention for each head. This led to an average of 8M and 9M positive pairs (q i , k j ) per layer for EN\u2192DE and EN\u2192FR, respectively. In practice, due to the small number of parameters for each head (only 4,160), a single epoch with Adam was sufficient to optimize the loss in Eq. 7. The hyperparameters and the training details for learning projections can be found in \u00a7C.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments: Machine Translation", "sec_num": "5" }, { "text": "Pareto-curves. Using the learned projections, we investigate the recall and the accuracy of all Sparsefinder variants by comparing them with Longformer, BigBird, Reformer, and Routing Transformer. 
To get a fair comparison, we analyze each method for different levels of sparsity by varying the following hyperparameters:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments: Machine Translation", "sec_num": "5" }, { "text": "\u2022 Distance-based methods: the threshold t within {0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments: Machine Translation", "sec_num": "5" }, { "text": "\u2022 Bucketing-based methods: the number of buckets B within {2, 4, 6, 8, 10, 12, 16, 20}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments: Machine Translation", "sec_num": "5" }, { "text": "\u2022 Fixed-pattern methods: the number of random blocks of size 1 within {2, 4, 6, 8, 10, 12, 16, 20} for BigBird; and the number of random global tokens within {2, 4, 6, 8, 10, 12, 16, 20} for Longformer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments: Machine Translation", "sec_num": "5" }, { "text": "We also add global and local patterns to all methods, varying the window size within {0, 1, 3, 5, 7, 9, 11, 15, 19, 23, 27} to get different levels of locality. We further compare all methods with a simple window baseline that only induces the window and global patterns. Since all methods exhibit a tradeoff between sparsity and recall/accuracy, we plot the scores obtained by varying the hyperparameters and draw their respective Pareto frontier to see the optimal Pareto-curve.", "cite_spans": [ { "start": 92, "end": 123, "text": "3, 5, 7, 9, 11, 15, 19, 23, 27}", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experiments: Machine Translation", "sec_num": "5" }, { "text": "Methods whose points lie below this frontier are said to be Pareto-dominated, meaning that their recall/accuracy cannot be increased without sacrificing sparsity, or vice-versa. Concretely, each point on the curve is measured as a function of the approximation to the ground-truth \u03b1-entmax attention graph G h by replacing it by\u011c h at test time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments: Machine Translation", "sec_num": "5" }, { "text": "Sparsity-recall tradeoff. Pareto-curves for the sparsity-recall tradeoff are shown on the left of Figure 2 for both language pairs. Overall, both language pairs have similar trends for all methods. Sparsefinder's distance-based and clustering approaches Pareto-dominates the other methods, followed by Routing Transformer. Interestingly, Longformer, BigBird, Routing Transformer, and Sparsefinder's bucketing approach perform on par with the baseline, indicating that a simple local window is a hard baseline to beat. Since the LSH attention in Reformer shares queries and keys before hashing, the resultant buckets are also shared for queries and keys, explaining the high recall and the low sparsity of Reformer.", "cite_spans": [], "ref_spans": [ { "start": 98, "end": 106, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Experiments: Machine Translation", "sec_num": "5" }, { "text": "Sparsity-accuracy tradeoff. We show the tradeoff between sparsity and BLEU on the right of Figure 2 . For lower levels of sparsity, all methods perform well, close to the full entmax transformer. 
But as sparsity increases, indicating that only a few computations are necessary, we see that the distance-based and k-means variants of Sparsefinder Pareto-dominate other methods, keeping a very high BLEU without abdicating sparsity.", "cite_spans": [], "ref_spans": [ { "start": 91, "end": 99, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Experiments: Machine Translation", "sec_num": "5" }, { "text": "In particular, Sparsefinder's distance and clustering approaches perform on par with the full entmax transformer when the amount of sparsity is close to the original entmax transformer (around the vertical dashed line). Overall, these plots show that methods with a high recall for higher levels of sparsity also tend to have a higher BLEU score.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments: Machine Translation", "sec_num": "5" }, { "text": "Learned patterns. We select some heads and show in Figure 3 examples of the pattern learned by our k-means variant on EN\u2192FR. More examples can be found in \u00a7E. We note that the window pattern is useful to recover local connections. We can see that the k-means variant groups more query and key pairs than the actual number of groundtruth edges (left plots). However, due to the sparseconsistency property (right plots), most of these predictions receive zero probability by \u03b1-entmax, resulting in a very accurate approximation. 6 Experiments: Masked LM Setup. Following Beltagy et al. 2020, we initialize our model from a pretrained RoBERTa checkpoint. We use the roberta-base model from Huggingface's transformers library, with 12 layers and 12 heads. 4 We finetune on WikiText-103 (Merity et al., 2017) , replacing softmax by \u03b1-entmax with \u03b1 = 1.5 for all heads. Training details, model hyperparameters, and data statistics can be found in \u00a7D.", "cite_spans": [ { "start": 752, "end": 753, "text": "4", "ref_id": null }, { "start": 769, "end": 803, "text": "WikiText-103 (Merity et al., 2017)", "ref_id": null } ], "ref_spans": [ { "start": 51, "end": 59, "text": "Figure 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Experiments: Machine Translation", "sec_num": "5" }, { "text": "Learning projections. As done for MT experiments, we learn to project keys and queries from the original 64 dimensions into r = 4 dimensions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments: Machine Translation", "sec_num": "5" }, { "text": "To this end, we use 1K random samples from the training set, each with length of 512, keeping half for validation. We extract the \u03b1-entmax attention graphs G h but from the encoder self-attention of each head, leading to an average of 3M positive pairs per layer. Due to the small number of learnable parameters for each head (256), training was done with Adam for one epoch.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments: Machine Translation", "sec_num": "5" }, { "text": "Results. Our full transformer trained with \u03b1entmax achieved a perplexity score of 3.5004 with an overall sparsity of 0.9804 on WikiText-103. As in sentence-level MT experiments, we measure the sparsity-recall and the sparsity-perplexity tradeoff via the change of G h with\u011c h at test time. Moreover, since MLM has longer inputs, we increased the range of the window pattern to {31, 41, 51, 75, 101, 125, 151, 175, 201, 251}. 
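For reference, both axes of these tradeoff curves are computed directly from the boolean edge masks of the ground-truth and predicted graphs, following Eqs. 4 and 5 (a small sketch with hypothetical inputs):

```python
import torch

def graph_recall(pred: torch.Tensor, gold: torch.Tensor) -> float:
    """Eq. 4: fraction of ground-truth entmax edges recovered by the prediction."""
    return (pred & gold).sum().item() / gold.sum().item()

def graph_sparsity(pred: torch.Tensor) -> float:
    """Eq. 5: fraction of query-key pairs that never need to be scored.

    pred is an (n, m) boolean mask; for decoder self-attention the denominator
    becomes n(n + 1)/2 because of the causal mask (cf. footnote 2).
    """
    n, m = pred.shape
    return 1.0 - pred.sum().item() / (n * m)
```
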
We show in Figure 4 the Pareto curves for the tradeoff between sparsity and recall (left), and the tradeoff between sparsity and perplexity (right). The curves for the sparsity-recall tradeoff are similar to the ones found in MT experiments, with the distance-based method outperforming all methods, followed by the k-means variant of Sparsefinder and Routing Transformer. In terms of perplexity, our distance-based approach also Pareto-dominates other methods, followed by our clustering variant and Routing Transformer. As in the MT experiments, the window baseline yields a similar sparsity-recall curve to other approaches, reinforcing the importance of local patterns. Although the distance-based method requires a quadratic number of computations, it reduces them by a factor of d/r = 64/4 = 16, as described in \u00a74.3, and achieves better recall and perplexity than any other tested method. This finding indicates clear room for improvement in designing efficient attention methods that have a better tradeoff between efficiency and accuracy than existing approaches.", "cite_spans": [ { "start": 377, "end": 424, "text": "{31, 41, 51, 75, 101, 125, 151, 175, 201, 251}.", "ref_id": null } ], "ref_spans": [ { "start": 436, "end": 444, "text": "Figure 4", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Experiments: Machine Translation", "sec_num": "5" }, { "text": "Learned patterns. In Figure 5 we show Sparsefinder k-means' predicted attention graphs for a specific attention head that originally learned to focus on coreference tokens. We can see that the pattern induced by Sparsefinder keeps the behavior of attending to coreferences. Concretely, our method achieves a high recall score (\u223c 80%) with a high sparsity rate (\u223c 75%) on this attention head. Cluster analysis. To understand what is represented in each cluster learned by Sparsefinder kmeans, we run the following experiment: we obtain POS tags using spaCy, 5 and calculate the distribution of each tag over clusters for all heads. We show an example in Figure 6 , where Sparsefinder learned a cluster that makes verbs and nouns attend to themselves, and additionally to most auxiliary verbs.", "cite_spans": [], "ref_spans": [ { "start": 21, "end": 29, "text": "Figure 5", "ref_id": "FIGREF5" }, { "start": 653, "end": 661, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Experiments: Machine Translation", "sec_num": "5" }, { "text": "A D V A U X C C O N J D E T I N T J N O U N N U M P A R T P R O N P R O P N P U N C T S C O N J S P A C E S Y M V E R B X 0% 20% 40% 60% 80%", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A D J A D P", "sec_num": null }, { "text": "Queries Keys Figure 6 : Percentage of POS tags assigned to a given cluster on the entire Wikitext 103 validation set.", "cite_spans": [], "ref_spans": [ { "start": 13, "end": 21, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "A D J A D P", "sec_num": null }, { "text": "We proposed Sparsefinder, a method to identify the sparsity pattern of entmax-based transformers while avoiding full computation of the score matrix. Our method learns a low-dimensional projection of queries and keys with a contrastive objective, and comes with three variants: distance, quantization, and clustering-based. We compared these variants against competing approaches on two tasks: machine translation and masked language modeling. We obtained favorable sparsity-recall and sparsityaccuracy tradeoff curves. 
Our theoretical sparsity provides a lower bound for how much computational sparsity can be achieved, and may guide future research on efficient transformers. Training and Model. We replicated the sentence-level model of Fernandes et al. (2021) with the exception that we used \u03b1-entmax with \u03b1 = 1.5 instead of softmax in all attention heads and layers. Table 3 shows some architecture (transformer large) and training hyperparameters used for MT experiments. We refer to the original work of Fernandes et al. (2021) Table 3 : Hyperparmeters for neural machine translation models.", "cite_spans": [ { "start": 740, "end": 763, "text": "Fernandes et al. (2021)", "ref_id": "BIBREF11" }, { "start": 1011, "end": 1034, "text": "Fernandes et al. (2021)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 872, "end": 879, "text": "Table 3", "ref_id": null }, { "start": 1035, "end": 1042, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Conclusions", "sec_num": "7" }, { "text": "Data. Statistics for the subsets of IWSLT used in the projection analysis can be found below in Table 4 . Training. After extracting the \u03b1-entmax graphs, we optimize the learnable parameters of Equation 7 with Adam over a single epoch. Moreover, we used the k-means implementation from scikit-learn (Pedregosa et al., 2011) for our clustering-based approach. The hyperparameters used both for training the projections and for clustering with k-means are shown in Table 5 .", "cite_spans": [ { "start": 299, "end": 323, "text": "(Pedregosa et al., 2011)", "ref_id": "BIBREF18" } ], "ref_spans": [ { "start": 96, "end": 103, "text": "Table 4", "ref_id": "TABREF4" }, { "start": 463, "end": 470, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "C.2 Projections setup", "sec_num": null }, { "text": "Projection analysis. We compare Sparsefinder, varying B \u2208 {2, 4, 6, 8, 10, 12} for bucket-based methods, and t \u2208 {0.5, 1.0, 1.5, 2.0, 2.5} for the distance-based variant, with the following methods:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C.2 Projections setup", "sec_num": null }, { "text": "\u2022 Window baseline: connect all query and key pairs within a sliding window of size w \u2208 {0, 1, 3, 5, 7, 9, 11, 15, 19, 23, 27}. \u2022 Learnable patterns: Reformer by varying the number of buckets within {2, 4, 6, 8, 10, 12}; Routing transformer by varying the number of clusters within c \u2208 {2, 4, 6, 8, 10} with top-k set to \u2308n/c\u2309 (i.e. balanced clusters).", "cite_spans": [ { "start": 94, "end": 126, "text": "3, 5, 7, 9, 11, 15, 19, 23, 27}.", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "C.2 Projections setup", "sec_num": null }, { "text": "\u2022 Fixed patterns: BigBird by varying the number of random blocks within {2, 4, 6, 8, 10} with a block size of 1; Longformer by varying the number of random global tokens within {4, 8, 12, 16, 20}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C.2 Projections setup", "sec_num": null }, { "text": "https://github.com/deep-spin/ sparsefinder", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "For the decoder self-attention the denominator in Eq. 5 becomes n(n + 1)/2 due to \"causal\" masking.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The difference relies on the dimension on which the topk operation is applied. 
Routing Transformer applies top-k to the input dimension, possibly leaving some queries unattended, whereas Sparsefinder applies to the centroids dimension, avoiding this problem.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://huggingface.co/roberta-base", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://spacy.io/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Figure 11: Learned patterns by Sparsefinder k-means (left) and the subsequent attention weights (right). Starred blocks represent ground-truth edges.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was supported by the European Research Council (ERC StG DeepSPIN 758969), by the P2020 project MAIA (LISBOA-01-0247-FEDER045909), and by the Funda\u00e7\u00e3o para a Ci\u00eancia e Tecnologia through project PTDC/CCI-INF/4703/2021 (PRELUNA) and contract UIDB/50008/2020.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null }, { "text": "A natural way to get a sparse attention distribution is by using the sparsemax transformation (Martins and Astudillo, 2016) , which computes an Euclidean projection of the score vector onto the probability simplex \u25b3 n := {p \u2208 R n | p \u2265 0, 1 \u22a4 p = 1}, or, more generally, the \u03b1-entmax transformation (Peters et al., 2019) :\u03b1-entmax(z) := arg maxwhere H \u03b1 is a generalization of the Shannon and Gini entropies proposed by Tsallis (1988) , parametrized by a scalar \u03b1 \u2265 1:Setting \u03b1 = 1 recovers the softmax function, while for any value of \u03b1 > 1 this transformation can return a sparse probability vector. Letting \u03b1 = 2, we recover sparsemax. A popular choice is \u03b1 = 1.5, which has been successfully used in machine translation and morphological inflection applications (Peters et al., 2019; Correia et al., 2019) .Proof to Proposition 1.Proof. From the definition of z| m and from Eq. 2, we have thatWe first prove that \u03c4 (z| m ) = \u03c4 (z). From the definition of \u03c4 (z) we have that1 /\u03b1\u22121 + = 1. Plugging the (in)equalities from Eq. 12, we thus haveSince \u03c4 (z) satisfies the second equation -which is the condition that defines \u03c4 (z| m ) -we thus conclude that \u03c4 (z| m ) = \u03c4 (z). Combining the results in Eqs. 12-13, we see that the supports of \u03b1-entmax(z) and \u03b1-entmax(z| m ) are the same and so are the thresholds \u03c4 , and therefore from Eq. 2 we conclude that \u03b1-entmax(z| m ) = \u03b1-entmax(z).", "cite_spans": [ { "start": 94, "end": 123, "text": "(Martins and Astudillo, 2016)", "ref_id": "BIBREF16" }, { "start": 299, "end": 320, "text": "(Peters et al., 2019)", "ref_id": "BIBREF19" }, { "start": 420, "end": 434, "text": "Tsallis (1988)", "ref_id": "BIBREF27" }, { "start": 766, "end": 787, "text": "(Peters et al., 2019;", "ref_id": "BIBREF19" }, { "start": 788, "end": 809, "text": "Correia et al., 2019)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "A Sparse Attention", "sec_num": null }, { "text": "Our infrastructure consists of 4 machines with the specifications shown in Table 1 . The machines were used interchangeably, and all experiments were executed in a single GPU. 
Despite having machines with different specifications, we did not observe large differences in the execution time of our models across different machines. Data and model. In order to have a transformer model trained with \u03b1-entmax, we finetuned RoBERTa-Base (Liu et al., 2019) on WikiText-103 (Merity et al., 2017) over 3000 steps with Adam (learning rate of 3 \u00d7 10 \u22125 ). To mimic the finetuning approach adopted by Longformer, we employed a batch size of 2 by accumulating gradients over 32 steps due to GPU memory constraints. Table 6 shows some architecture (transformer large) and training hyperparameters used for MT experiments. We refer to the original work of Liu et al. (2019) ", "cite_spans": [ { "start": 433, "end": 451, "text": "(Liu et al., 2019)", "ref_id": "BIBREF15" }, { "start": 468, "end": 489, "text": "(Merity et al., 2017)", "ref_id": "BIBREF17" }, { "start": 843, "end": 860, "text": "Liu et al. (2019)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 75, "end": 82, "text": "Table 1", "ref_id": null }, { "start": 704, "end": 711, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "B Computing infrastructure", "sec_num": null }, { "text": "Data and training. The subset used for Masked LM projections experiments contains 500 instances for training and 500 instances for validation. Moreover, all instances have a sentence length of 512 tokens. We got 3M (\u00b11M) positive pairs for training and 2.5M (\u00b11M) for validation. The hyperparameters for Masked LM are the same as the ones used in the MT experiments, shown in Table 5 .Projection analysis. We perform the same analysis as in MT, but now we vary the window size of the baseline within {0, 1, 3, 7, 11, 25, 31, 41, 51, 75, 101, 125, 151, 175, 201, 251, 301, 351, 401, 451, 501, 512} .Sparsity-recall tradeoff per layer and head. Plots are shown next in Figure 9 . ", "cite_spans": [ { "start": 500, "end": 596, "text": "{0, 1, 3, 7, 11, 25, 31, 41, 51, 75, 101, 125, 151, 175, 201, 251, 301, 351, 401, 451, 501, 512}", "ref_id": null } ], "ref_spans": [ { "start": 376, "end": 383, "text": "Table 5", "ref_id": null }, { "start": 667, "end": 675, "text": "Figure 9", "ref_id": null } ], "eq_spans": [], "section": "D.2 Projections setup", "sec_num": null }, { "text": "Examples of attention maps can be seen in Figure 10 and 11.", "cite_spans": [], "ref_spans": [ { "start": 42, "end": 51, "text": "Figure 10", "ref_id": null } ], "eq_spans": [], "section": "E Attention plots", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Metric learning. Synthesis Lectures on Artificial Intelligence and Machine Learning", "authors": [ { "first": "Aur\u00e9lien", "middle": [], "last": "Bellet", "suffix": "" }, { "first": "Amaury", "middle": [], "last": "Habrard", "suffix": "" }, { "first": "Marc", "middle": [], "last": "Sebban", "suffix": "" } ], "year": 2015, "venue": "", "volume": "9", "issue": "", "pages": "1--151", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aur\u00e9lien Bellet, Amaury Habrard, and Marc Sebban. 2015. Metric learning. 
Synthesis Lectures on Artifi- cial Intelligence and Machine Learning, 9(1):1-151.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Longformer: The long-document transformer", "authors": [ { "first": "Iz", "middle": [], "last": "Beltagy", "suffix": "" }, { "first": "Matthew", "middle": [ "E" ], "last": "Peters", "suffix": "" }, { "first": "Arman", "middle": [], "last": "Cohan", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2004.05150" ] }, "num": null, "urls": [], "raw_text": "Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv:2004.05150.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Language models are few-shot learners", "authors": [ { "first": "Benjamin", "middle": [], "last": "Tom B Brown", "suffix": "" }, { "first": "Nick", "middle": [], "last": "Mann", "suffix": "" }, { "first": "Melanie", "middle": [], "last": "Ryder", "suffix": "" }, { "first": "Jared", "middle": [], "last": "Subbiah", "suffix": "" }, { "first": "Prafulla", "middle": [], "last": "Kaplan", "suffix": "" }, { "first": "Arvind", "middle": [], "last": "Dhariwal", "suffix": "" }, { "first": "Pranav", "middle": [], "last": "Neelakantan", "suffix": "" }, { "first": "Girish", "middle": [], "last": "Shyam", "suffix": "" }, { "first": "Amanda", "middle": [], "last": "Sastry", "suffix": "" }, { "first": "", "middle": [], "last": "Askell", "suffix": "" } ], "year": 2020, "venue": "Advances in Neural Information Processing Systems (NeurIPS)", "volume": "33", "issue": "", "pages": "1877--1901", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. In Advances in Neural Information Process- ing Systems (NeurIPS), volume 33, pages 1877-1901. Curran Associates, Inc.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Overview of the iwslt 2017 evaluation campaign", "authors": [ { "first": "Mauro", "middle": [], "last": "Cettolo", "suffix": "" }, { "first": "Marcello", "middle": [], "last": "Federico", "suffix": "" }, { "first": "Luisa", "middle": [], "last": "Bentivogli", "suffix": "" }, { "first": "Niehues", "middle": [], "last": "Jan", "suffix": "" }, { "first": "St\u00fcker", "middle": [], "last": "Sebastian", "suffix": "" }, { "first": "Sudoh", "middle": [], "last": "Katsuitho", "suffix": "" }, { "first": "Yoshino", "middle": [], "last": "Koichiro", "suffix": "" }, { "first": "Federmann", "middle": [], "last": "Christian", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 14th International Workshop on Spoken Language Translation (IWSLT)", "volume": "", "issue": "", "pages": "2--14", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mauro Cettolo, Marcello Federico, Luisa Bentivogli, Niehues Jan, St\u00fcker Sebastian, Sudoh Katsuitho, Yoshino Koichiro, and Federmann Christian. 2017. Overview of the iwslt 2017 evaluation campaign. 
In Proceedings of the 14th International Workshop on Spoken Language Translation (IWSLT), pages 2-14.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Generating long sequences with sparse transformers", "authors": [ { "first": "Rewon", "middle": [], "last": "Child", "suffix": "" }, { "first": "Scott", "middle": [], "last": "Gray", "suffix": "" }, { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1904.10509" ] }, "num": null, "urls": [], "raw_text": "Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. 2019. Generating long se- quences with sparse transformers. arXiv preprint arXiv:1904.10509.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Rethinking attention with performers", "authors": [ { "first": "Valerii", "middle": [], "last": "Krzysztof Marcin Choromanski", "suffix": "" }, { "first": "David", "middle": [], "last": "Likhosherstov", "suffix": "" }, { "first": "Xingyou", "middle": [], "last": "Dohan", "suffix": "" }, { "first": "Andreea", "middle": [], "last": "Song", "suffix": "" }, { "first": "Tamas", "middle": [], "last": "Gane", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Sarlos", "suffix": "" }, { "first": "Jared", "middle": [ "Quincy" ], "last": "Hawkins", "suffix": "" }, { "first": "Afroz", "middle": [], "last": "Davis", "suffix": "" }, { "first": "Lukasz", "middle": [], "last": "Mohiuddin", "suffix": "" }, { "first": "David", "middle": [ "Benjamin" ], "last": "Kaiser", "suffix": "" }, { "first": "Lucy", "middle": [ "J" ], "last": "Belanger", "suffix": "" }, { "first": "Adrian", "middle": [], "last": "Colwell", "suffix": "" }, { "first": "", "middle": [], "last": "Weller", "suffix": "" } ], "year": 2021, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Krzysztof Marcin Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Quincy Davis, Afroz Mohiuddin, Lukasz Kaiser, David Benjamin Be- langer, Lucy J Colwell, and Adrian Weller. 2021. Re- thinking attention with performers. In International Conference on Learning Representations (ICLR).", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Adaptively sparse transformers", "authors": [ { "first": "M", "middle": [], "last": "Gon\u00e7alo", "suffix": "" }, { "first": "Vlad", "middle": [], "last": "Correia", "suffix": "" }, { "first": "Andr\u00e9", "middle": [ "F T" ], "last": "Niculae", "suffix": "" }, { "first": "", "middle": [], "last": "Martins", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "2174--2184", "other_ids": { "DOI": [ "10.18653/v1/D19-1223" ] }, "num": null, "urls": [], "raw_text": "Gon\u00e7alo M. Correia, Vlad Niculae, and Andr\u00e9 F. T. Martins. 2019. Adaptively sparse transformers. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 2174- 2184, Hong Kong, China. 
Association for Computa- tional Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Smyrf -efficient attention using asymmetric clustering", "authors": [ { "first": "Giannis", "middle": [], "last": "Daras", "suffix": "" }, { "first": "Nikita", "middle": [], "last": "Kitaev", "suffix": "" }, { "first": "Augustus", "middle": [], "last": "Odena", "suffix": "" }, { "first": "Alexandros G", "middle": [], "last": "Dimakis", "suffix": "" } ], "year": 2020, "venue": "Advances in Neural Information Processing Systems", "volume": "33", "issue": "", "pages": "6476--6489", "other_ids": {}, "num": null, "urls": [], "raw_text": "Giannis Daras, Nikita Kitaev, Augustus Odena, and Alexandros G Dimakis. 2020. Smyrf -efficient at- tention using asymmetric clustering. In Advances in Neural Information Processing Systems, volume 33, pages 6476-6489. Curran Associates, Inc.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Constrained clustering with minkowski weighted k-means", "authors": [ { "first": "Renato", "middle": [], "last": "Cordeiro De Amorim", "suffix": "" } ], "year": 2012, "venue": "2012 IEEE 13th International Symposium on Computational Intelligence and Informatics (CINTI)", "volume": "", "issue": "", "pages": "13--17", "other_ids": {}, "num": null, "urls": [], "raw_text": "Renato Cordeiro de Amorim. 2012. Constrained clus- tering with minkowski weighted k-means. In 2012 IEEE 13th International Symposium on Computa- tional Intelligence and Informatics (CINTI), pages 13-17. IEEE.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "ParaCrawl: Web-scale parallel corpora for the languages of the EU", "authors": [ { "first": "Miquel", "middle": [], "last": "Espl\u00e0", "suffix": "" }, { "first": "Mikel", "middle": [], "last": "Forcada", "suffix": "" }, { "first": "Gema", "middle": [], "last": "Ram\u00edrez-S\u00e1nchez", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Hoang", "suffix": "" } ], "year": 2019, "venue": "Proceedings of Machine Translation Summit XVII", "volume": "2", "issue": "", "pages": "118--119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Miquel Espl\u00e0, Mikel Forcada, Gema Ram\u00edrez-S\u00e1nchez, and Hieu Hoang. 2019. ParaCrawl: Web-scale paral- lel corpora for the languages of the EU. 
In Proceed- ings of Machine Translation Summit XVII Volume 2: Translator, Project and User Tracks, pages 118-119, Dublin, Ireland. European Association for Machine Translation.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Measuring and increasing context usage in context-aware machine translation", "authors": [ { "first": "Patrick", "middle": [], "last": "Fernandes", "suffix": "" }, { "first": "Kayo", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Andr\u00e9", "middle": [ "F T" ], "last": "Martins", "suffix": "" } ], "year": 2021, "venue": "Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-IJCNLP)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Patrick Fernandes, Kayo Yin, Graham Neubig, and An- dr\u00e9 F. T. Martins. 2021. Measuring and increasing context usage in context-aware machine translation. In Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-IJCNLP), Virtual.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Transformers are rnns: Fast autoregressive transformers with linear attention", "authors": [ { "first": "A", "middle": [], "last": "Katharopoulos", "suffix": "" }, { "first": "A", "middle": [], "last": "Vyas", "suffix": "" }, { "first": "N", "middle": [], "last": "Pappas", "suffix": "" }, { "first": "F", "middle": [], "last": "Fleuret", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the International Conference on Machine Learning (ICML)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Katharopoulos, A. Vyas, N. Pappas, and F. Fleuret. 2020. Transformers are rnns: Fast autoregressive transformers with linear attention. In Proceedings of the International Conference on Machine Learning (ICML).", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Multilingual constituency parsing with self-attention and pre-training", "authors": [ { "first": "Nikita", "middle": [], "last": "Kitaev", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "3499--3505", "other_ids": { "DOI": [ "10.18653/v1/P19-1340" ] }, "num": null, "urls": [], "raw_text": "Nikita Kitaev, Steven Cao, and Dan Klein. 2019. Multi- lingual constituency parsing with self-attention and pre-training. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguistics, pages 3499-3505, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Reformer: The efficient transformer", "authors": [ { "first": "Nikita", "middle": [], "last": "Kitaev", "suffix": "" }, { "first": "Lukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Anselm", "middle": [], "last": "Levskaya", "suffix": "" } ], "year": 2020, "venue": "International Conference on Learning Representations (ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nikita Kitaev, Lukasz Kaiser, and Anselm Levskaya. 2020. 
Reformer: The efficient transformer. In In- ternational Conference on Learning Representations (ICLR).", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Roberta: A robustly optimized bert pretraining approach", "authors": [ { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Jingfei", "middle": [], "last": "Du", "suffix": "" }, { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.11692" ] }, "num": null, "urls": [], "raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "From softmax to sparsemax: A sparse model of attention and multi-label classification", "authors": [ { "first": "Andre", "middle": [], "last": "Martins", "suffix": "" }, { "first": "Ramon", "middle": [], "last": "Astudillo", "suffix": "" } ], "year": 2016, "venue": "International Conference on Machine Learning (ICML)", "volume": "48", "issue": "", "pages": "1614--1623", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andre Martins and Ramon Astudillo. 2016. From soft- max to sparsemax: A sparse model of attention and multi-label classification. In International Confer- ence on Machine Learning (ICML), volume 48 of Proceedings of Machine Learning Research, pages 1614-1623, New York, New York, USA. PMLR.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Pointer sentinel mixture models", "authors": [ { "first": "Stephen", "middle": [], "last": "Merity", "suffix": "" }, { "first": "Caiming", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "James", "middle": [], "last": "Bradbury", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" } ], "year": 2017, "venue": "5th International Conference on Learning Representations (ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2017. Pointer sentinel mixture mod- els. 
In 5th International Conference on Learning Representations (ICLR).", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Scikit-learn: Machine learning in Python", "authors": [ { "first": "F", "middle": [], "last": "Pedregosa", "suffix": "" }, { "first": "G", "middle": [], "last": "Varoquaux", "suffix": "" }, { "first": "A", "middle": [], "last": "Gramfort", "suffix": "" }, { "first": "V", "middle": [], "last": "Michel", "suffix": "" }, { "first": "B", "middle": [], "last": "Thirion", "suffix": "" }, { "first": "O", "middle": [], "last": "Grisel", "suffix": "" }, { "first": "M", "middle": [], "last": "Blondel", "suffix": "" }, { "first": "P", "middle": [], "last": "Prettenhofer", "suffix": "" }, { "first": "R", "middle": [], "last": "Weiss", "suffix": "" }, { "first": "V", "middle": [], "last": "Dubourg", "suffix": "" }, { "first": "J", "middle": [], "last": "Vanderplas", "suffix": "" }, { "first": "A", "middle": [], "last": "Passos", "suffix": "" }, { "first": "D", "middle": [], "last": "Cournapeau", "suffix": "" }, { "first": "M", "middle": [], "last": "Brucher", "suffix": "" }, { "first": "M", "middle": [], "last": "Perrot", "suffix": "" }, { "first": "E", "middle": [], "last": "Duchesnay", "suffix": "" } ], "year": 2011, "venue": "Journal of Machine Learning Research (JMLR)", "volume": "12", "issue": "", "pages": "2825--2830", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duch- esnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research (JMLR), 12:2825-2830.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Sparse sequence-to-sequence models", "authors": [ { "first": "Ben", "middle": [], "last": "Peters", "suffix": "" }, { "first": "Vlad", "middle": [], "last": "Niculae", "suffix": "" }, { "first": "Andr\u00e9", "middle": [ "F T" ], "last": "Martins", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1504--1519", "other_ids": { "DOI": [ "10.18653/v1/P19-1146" ] }, "num": null, "urls": [], "raw_text": "Ben Peters, Vlad Niculae, and Andr\u00e9 F. T. Martins. 2019. Sparse sequence-to-sequence models. In Proceed- ings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1504-1519, Flo- rence, Italy. Association for Computational Linguis- tics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "A call for clarity in reporting BLEU scores", "authors": [ { "first": "Matt", "middle": [], "last": "Post", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Third Conference on Machine Translation: Research Papers", "volume": "", "issue": "", "pages": "186--191", "other_ids": { "DOI": [ "10.18653/v1/W18-6319" ] }, "num": null, "urls": [], "raw_text": "Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186- 191, Brussels, Belgium. 
Association for Computa- tional Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Fixed encoder self-attention patterns in transformer-based machine translation", "authors": [ { "first": "Alessandro", "middle": [], "last": "Raganato", "suffix": "" }, { "first": "Yves", "middle": [], "last": "Scherrer", "suffix": "" }, { "first": "J\u00f6rg", "middle": [], "last": "Tiedemann", "suffix": "" } ], "year": 2020, "venue": "Findings of the Association for Computational Linguistics: EMNLP 2020", "volume": "", "issue": "", "pages": "556--568", "other_ids": { "DOI": [ "10.18653/v1/2020.findings-emnlp.49" ] }, "num": null, "urls": [], "raw_text": "Alessandro Raganato, Yves Scherrer, and J\u00f6rg Tiede- mann. 2020. Fixed encoder self-attention patterns in transformer-based machine translation. In Find- ings of the Association for Computational Linguistics: EMNLP 2020, pages 556-568, Online. Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "An analysis of encoder representations in transformerbased machine translation", "authors": [ { "first": "Alessandro", "middle": [], "last": "Raganato", "suffix": "" }, { "first": "J\u00f6rg", "middle": [], "last": "Tiedemann", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", "volume": "", "issue": "", "pages": "287--297", "other_ids": { "DOI": [ "10.18653/v1/W18-5431" ] }, "num": null, "urls": [], "raw_text": "Alessandro Raganato and J\u00f6rg Tiedemann. 2018. An analysis of encoder representations in transformer- based machine translation. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 287-297, Brussels, Belgium. Association for Com- putational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Efficient content-based sparse attention with routing transformers", "authors": [ { "first": "Aurko", "middle": [], "last": "Roy", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Saffar", "suffix": "" }, { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "David", "middle": [], "last": "Grangier", "suffix": "" } ], "year": 2021, "venue": "Transactions of the Association for Computational Linguistics (TACL)", "volume": "9", "issue": "", "pages": "53--68", "other_ids": { "DOI": [ "10.1162/tacl_a_00353" ] }, "num": null, "urls": [], "raw_text": "Aurko Roy, Mohammad Saffar, Ashish Vaswani, and David Grangier. 2021. Efficient content-based sparse attention with routing transformers. Transactions of the Association for Computational Linguistics (TACL), 9:53-68.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Neural machine translation of rare words with subword units", "authors": [ { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1715--1725", "other_ids": { "DOI": [ "10.18653/v1/P16-1162" ] }, "num": null, "urls": [], "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. 
In Proceedings of the 54th Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany. Association for Computational Lin- guistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Do long-range language models actually use long-range context?", "authors": [ { "first": "Simeng", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Kalpesh", "middle": [], "last": "Krishna", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mattarella-Micke", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "807--822", "other_ids": {}, "num": null, "urls": [], "raw_text": "Simeng Sun, Kalpesh Krishna, Andrew Mattarella- Micke, and Mohit Iyyer. 2021. Do long-range lan- guage models actually use long-range context? In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 807- 822, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Sparse sinkhorn attention", "authors": [ { "first": "Yi", "middle": [], "last": "Tay", "suffix": "" }, { "first": "Dara", "middle": [], "last": "Bahri", "suffix": "" }, { "first": "Liu", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Donald", "middle": [], "last": "Metzler", "suffix": "" }, { "first": "Da-Cheng", "middle": [], "last": "Juan", "suffix": "" } ], "year": 2020, "venue": "International Conference on Machine Learning (ICML)", "volume": "", "issue": "", "pages": "9438--9447", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yi Tay, Dara Bahri, Liu Yang, Donald Metzler, and Da-Cheng Juan. 2020. Sparse sinkhorn attention. In International Conference on Machine Learning (ICML), pages 9438-9447. PMLR.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Possible generalization of boltzmann-gibbs statistics", "authors": [ { "first": "Constantino", "middle": [], "last": "Tsallis", "suffix": "" } ], "year": 1988, "venue": "Journal of Statistical Physics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Constantino Tsallis. 1988. Possible generalization of boltzmann-gibbs statistics. Journal of Statistical Physics.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems (NeurIPS)", "volume": "30", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems (NeurIPS), volume 30, pages 5998- 6008. 
Curran Associates, Inc.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned", "authors": [ { "first": "Elena", "middle": [], "last": "Voita", "suffix": "" }, { "first": "David", "middle": [], "last": "Talbot", "suffix": "" }, { "first": "Fedor", "middle": [], "last": "Moiseev", "suffix": "" }, { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Titov", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "5797--5808", "other_ids": { "DOI": [ "10.18653/v1/P19-1580" ] }, "num": null, "urls": [], "raw_text": "Elena Voita, David Talbot, Fedor Moiseev, Rico Sen- nrich, and Ivan Titov. 2019. Analyzing multi-head self-attention: Specialized heads do the heavy lift- ing, the rest can be pruned. In Proceedings of the 57th Annual Meeting of the Association for Computa- tional Linguistics, pages 5797-5808, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Fast transformers with clustered attention", "authors": [ { "first": "A", "middle": [], "last": "Vyas", "suffix": "" }, { "first": "A", "middle": [], "last": "Katharopoulos", "suffix": "" }, { "first": "F", "middle": [], "last": "Fleuret", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the International Conference on Neural Information Processing Systems (NeurIPS)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Vyas, A. Katharopoulos, and F. Fleuret. 2020. Fast transformers with clustered attention. In Proceedings of the International Conference on Neural Informa- tion Processing Systems (NeurIPS).", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Constrained k-means clustering with background knowledge", "authors": [ { "first": "Kiri", "middle": [], "last": "Wagstaff", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" }, { "first": "Seth", "middle": [], "last": "Rogers", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Schr\u00f6dl", "suffix": "" } ], "year": 2001, "venue": "International Conference on Machine Learning (ICML)", "volume": "", "issue": "", "pages": "577--584", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kiri Wagstaff, Claire Cardie, Seth Rogers, and Stefan Schr\u00f6dl. 2001. Constrained k-means clustering with background knowledge. 
In International Conference on Machine Learning (ICML), page 577-584.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Cluster-former: Clustering-based sparse transformer for question answering", "authors": [ { "first": "Shuohang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Luowei", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Zhe", "middle": [], "last": "Gan", "suffix": "" }, { "first": "Yen-Chun", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Yuwei", "middle": [], "last": "Fang", "suffix": "" }, { "first": "Siqi", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Jingjing", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2021, "venue": "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021", "volume": "", "issue": "", "pages": "3958--3968", "other_ids": { "DOI": [ "10.18653/v1/2021.findings-acl.346" ] }, "num": null, "urls": [], "raw_text": "Shuohang Wang, Luowei Zhou, Zhe Gan, Yen-Chun Chen, Yuwei Fang, Siqi Sun, Yu Cheng, and Jingjing Liu. 2021. Cluster-former: Clustering-based sparse transformer for question answering. In Findings of the Association for Computational Linguistics: ACL- IJCNLP 2021, pages 3958-3968, Online. Association for Computational Linguistics.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Linformer: Self-attention with linear complexity", "authors": [ { "first": "Sinong", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Belinda", "middle": [], "last": "Li", "suffix": "" }, { "first": "Madian", "middle": [], "last": "Khabsa", "suffix": "" }, { "first": "Han", "middle": [], "last": "Fang", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Ma", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2006.04768" ] }, "num": null, "urls": [], "raw_text": "Sinong Wang, Belinda Li, Madian Khabsa, Han Fang, and Hao Ma. 2020. Linformer: Self-attention with linear complexity. arXiv preprint arXiv:2006.04768.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Distance metric learning for large margin nearest neighbor classification", "authors": [ { "first": "Q", "middle": [], "last": "Kilian", "suffix": "" }, { "first": "Lawrence K", "middle": [], "last": "Weinberger", "suffix": "" }, { "first": "", "middle": [], "last": "Saul", "suffix": "" } ], "year": 2009, "venue": "Journal of Machine Learning Research", "volume": "", "issue": "2", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kilian Q Weinberger and Lawrence K Saul. 2009. Dis- tance metric learning for large margin nearest neigh- bor classification. 
Journal of Machine Learning Re- search (JMLR), 10(2).", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Distance metric learning with application to clustering with side-information", "authors": [ { "first": "P", "middle": [], "last": "Eric", "suffix": "" }, { "first": "", "middle": [], "last": "Xing", "suffix": "" }, { "first": "Y", "middle": [], "last": "Andrew", "suffix": "" }, { "first": "Michael", "middle": [ "I" ], "last": "Ng", "suffix": "" }, { "first": "Stuart", "middle": [], "last": "Jordan", "suffix": "" }, { "first": "", "middle": [], "last": "Russell", "suffix": "" } ], "year": 2002, "venue": "Advances in Neural Information Processing Systems (NeurIPS)", "volume": "15", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eric P Xing, Andrew Y Ng, Michael I Jordan, and Stuart Russell. 2002. Distance metric learning with applica- tion to clustering with side-information. In Advances in Neural Information Processing Systems (NeurIPS), volume 15, page 12.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Big bird: Transformers for longer sequences", "authors": [ { "first": "Manzil", "middle": [], "last": "Zaheer", "suffix": "" }, { "first": "Guru", "middle": [], "last": "Guruganesh", "suffix": "" }, { "first": "Joshua", "middle": [], "last": "Kumar Avinava Dubey", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Ainslie", "suffix": "" }, { "first": "Santiago", "middle": [], "last": "Alberti", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Ontanon", "suffix": "" }, { "first": "Anirudh", "middle": [], "last": "Pham", "suffix": "" }, { "first": "Qifan", "middle": [], "last": "Ravula", "suffix": "" }, { "first": "Li", "middle": [], "last": "Wang", "suffix": "" }, { "first": "", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2020, "venue": "Advances in Neural Information Processing Systems (NeurIPS)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago On- tanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, et al. 2020. Big bird: Transformers for longer sequences. Advances in Neural Information Processing Systems (NeurIPS), 33.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Sparse attention with linear units", "authors": [ { "first": "Biao", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Titov", "suffix": "" }, { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2104.07012" ] }, "num": null, "urls": [], "raw_text": "Biao Zhang, Ivan Titov, and Rico Sennrich. 2021. Sparse attention with linear units. 
arXiv preprint arXiv:2104.07012.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Explicit sparse transformer: Concentrated attention through explicit selection", "authors": [ { "first": "Guangxiang", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Junyang", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Xuancheng", "middle": [], "last": "Ren", "suffix": "" }, { "first": "Qi", "middle": [], "last": "Su", "suffix": "" }, { "first": "Xu", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1912.11637" ] }, "num": null, "urls": [], "raw_text": "Guangxiang Zhao, Junyang Lin, Zhiyuan Zhang, Xu- ancheng Ren, Qi Su, and Xu Sun. 2019. Explicit sparse transformer: Concentrated attention through explicit selection. arXiv preprint arXiv:1912.11637.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "\u03b1-entmax graph b) Project and group q i and k j c) Add local + global patterns", "type_str": "figure", "uris": null, "num": null }, "FIGREF1": { "text": "(a) Extract sparse attention graphs from a pretrained \u03b1-entmax transformer;", "type_str": "figure", "uris": null, "num": null }, "FIGREF2": { "text": "Sparsity-recall (left) and sparsity-BLEU (right) tradeoff averaged across all layers and heads on IWSLT EN\u2192DE (top) and EN\u2192FR (bottom). The vertical dashed line represents the gold sparsity obtained by the original \u03b1-entmax transformer (which requires quadratic computation), and the starred marks depict its BLEU score: 34.47 on EN\u2192DE and 42.65 on EN\u2192FR.", "type_str": "figure", "uris": null, "num": null }, "FIGREF3": { "text": "Learned patterns by Sparsefinder k-means (left) and the subsequent attention weights (right). Starred blocks represent ground-truth edges.", "type_str": "figure", "uris": null, "num": null }, "FIGREF4": { "text": "Sparsity-recall and sparsity-(neg-)perplexity tradeoff averaged across all layers and heads on WikiText-103. The vertical dashed line represents the gold sparsity obtained by the full \u03b1-entmax transformer.", "type_str": "figure", "uris": null, "num": null }, "FIGREF5": { "text": "Attention pattern learned by Sparsefinder kmeans that focus on coreference tokens.", "type_str": "figure", "uris": null, "num": null }, "TABREF1": { "type_str": "table", "num": null, "text": "Statistics for MT datasets.", "html": null, "content": "" }, "TABREF2": { "type_str": "table", "num": null, "text": "for more training details.", "html": null, "content": "
HYPERPARAM.                  VALUE
Hidden size                  1024
Feedforward size             4096
Number of layers             6
Number of heads              16
Attention mapping \u03c0       1.5-entmax
Optimizer                    Adam
Number of epochs             20
Early stopping patience      10
Learning rate                0.0005
Scheduling                   Inverse square root
Linear warm-up steps         4000
Dropout                      0.3
CoWord dropout               0.1
Beam size                    5
" }, "TABREF3": { "type_str": "table", "num": null, "text": "TRAIN VALIDATION PAIR # SENT. # POS. PAIRS AVG. SENT. LENGTH # SENT. # POS. PAIRS AVG. SENT. LENGTH", "html": null, "content": "
PAIR        TRAIN: # SENT.   # POS. PAIRS   AVG. SENT. LENGTH   VALIDATION: # SENT.   # POS. PAIRS   AVG. SENT. LENGTH
EN\u2192DE   9K               8M \u00b11M     35 \u00b116          1K                    330K \u00b156K   36 \u00b117
EN\u2192FR   9K               9M \u00b11M     37 \u00b117          1K                    334K \u00b158K   37 \u00b116
" }, "TABREF4": { "type_str": "table", "num": null, "text": "Statistics for subsets of IWSLT used for training and evaluating projections.", "html": null, "content": "" } } } }