{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:57:22.693410Z" }, "title": "A Primer in BERTology: What We Know About How BERT Works", "authors": [ { "first": "Anna", "middle": [], "last": "Rogers", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Copenhagen", "location": {} }, "email": "arogers@sodas.ku.dk" }, { "first": "Olga", "middle": [], "last": "Kovaleva", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Massachusetts Lowell", "location": {} }, "email": "okovalev@cs.uml.edu" }, { "first": "Anna", "middle": [], "last": "Rumshisky", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Massachusetts Lowell", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Transformer-based models have pushed state of the art in many areas of NLP, but our understanding of what is behind their success is still limited. This paper is the first survey of over studies of the popular BERT model. We review the current state of knowledge about how BERT works, what kind of information it learns and how it is represented, common modifications to its training objectives and architecture, the overparameterization issue, and approaches to compression. We then outline directions for future research.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Transformer-based models have pushed state of the art in many areas of NLP, but our understanding of what is behind their success is still limited. This paper is the first survey of over studies of the popular BERT model. We review the current state of knowledge about how BERT works, what kind of information it learns and how it is represented, common modifications to its training objectives and architecture, the overparameterization issue, and approaches to compression. We then outline directions for future research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Since their introduction in 2017, Transformers (Vaswani et al., 2017) have taken NLP by storm, offering enhanced parallelization and better modeling of long-range dependencies. The best known Transformer-based model is BERT (Devlin et al., 2019) ; it obtained state-of-the-art results in numerous benchmarks and is still a must-have baseline.", "cite_spans": [ { "start": 47, "end": 69, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF146" }, { "start": 224, "end": 245, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Although it is clear that BERT works remarkably well, it is less clear why, which limits further hypothesis-driven improvement of the architecture. Unlike CNNs, the Transformers have little cognitive motivation, and the size of these models limits our ability to experiment with pre-training and perform ablation studies. This explains a large number of studies over the past year that attempted to understand the reasons behind BERT's performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we provide an overview of what has been learned to date, highlighting the questions that are still unresolved. 
We first consider the linguistic aspects of it, namely, the current evidence regarding the types of linguistic and world knowledge learned by BERT, as well as where and how this knowledge may be stored in the model. We then turn to the technical aspects of the model and provide an overview of the current proposals to improve BERT's architecture, pre-training, and fine-tuning. We conclude by discussing the issue of overparameterization, the approaches to compressing BERT, and the nascent area of pruning as a model analysis technique.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Fundamentally, BERT is a stack of Transformer encoder layers (Vaswani et al., 2017) that consist of multiple self-attention ''heads''. For every input token in a sequence, each head computes key, value, and query vectors, used to create a weighted representation. The outputs of all heads in the same layer are combined and run through a fully connected layer. Each layer is wrapped with a skip connection and followed by layer normalization.", "cite_spans": [ { "start": 61, "end": 83, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF146" } ], "ref_spans": [], "eq_spans": [], "section": "Overview of BERT Architecture", "sec_num": "2" }, { "text": "The conventional workflow for BERT consists of two stages: pre-training and fine-tuning. Pretraining uses two self-supervised tasks: masked language modeling (MLM, prediction of randomly masked input tokens) and next sentence prediction (NSP, predicting if two input sentences are adjacent to each other). In fine-tuning for downstream applications, one or more fully connected layers are typically added on top of the final encoder layer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview of BERT Architecture", "sec_num": "2" }, { "text": "The input representations are computed as follows: Each word in the input is first tokenized into wordpieces (Wu et al., 2016) , and then three embedding layers (token, position, and segment) are combined to obtain a fixed-length vector. Special token [CLS] is used for classification predictions, and [SEP] separates input segments.", "cite_spans": [ { "start": 109, "end": 126, "text": "(Wu et al., 2016)", "ref_id": "BIBREF169" }, { "start": 252, "end": 257, "text": "[CLS]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Overview of BERT Architecture", "sec_num": "2" }, { "text": "Google 1 and HuggingFace (Wolf et al., 2020 ) provide many variants of BERT, including the original ''base'' and ''large'' versions. They vary in the number of heads, layers, and hidden state size.", "cite_spans": [ { "start": 25, "end": 43, "text": "(Wolf et al., 2020", "ref_id": "BIBREF117" } ], "ref_spans": [], "eq_spans": [], "section": "Overview of BERT Architecture", "sec_num": "2" }, { "text": "A number of studies have looked at the knowledge encoded in BERT weights. The popular approaches include fill-in-the-gap probes of MLM, analysis of self-attention weights, and probing classifiers with different BERT representations as inputs. showed that BERT representations are hierarchical rather than linear, that is, there is something akin to syntactic tree structure in addition to the word order information. Tenney et al. (2019b) and also showed that BERT embeddings encode information about parts of speech, syntactic chunks, and roles. 
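To make the probing-classifier setup concrete, the sketch below feeds frozen BERT features for each word to a simple classifier that predicts a linguistic label such as part of speech. It is only an illustration: the choice of bert-base-uncased, the use of layer 8, the first-wordpiece pooling, and the toy labelled sentences are assumptions, not the setup of any particular study.

import torch
from sklearn.linear_model import LogisticRegression
from transformers import BertModel, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_hidden_states=True).eval()

def word_features(sentence, layer=8):
    # One frozen-BERT vector per word (its first wordpiece); layer 8 is an arbitrary middle layer.
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).hidden_states[layer][0]      # (num_wordpieces, 768)
    first_piece, seen = [], set()
    for idx, wid in enumerate(enc.word_ids()):
        if wid is not None and wid not in seen:
            seen.add(wid)
            first_piece.append(idx)
    return hidden[first_piece]

# Toy supervision; real probing studies use annotated corpora such as POS-tagged treebanks.
data = [("The cat sleeps", ["DET", "NOUN", "VERB"]),
        ("Dogs bark loudly", ["NOUN", "VERB", "ADV"])]
X = torch.cat([word_features(s) for s, _ in data]).numpy()
y = [tag for _, tags in data for tag in tags]

probe = LogisticRegression(max_iter=1000).fit(X, y)        # the probe itself is kept deliberately simple
print(probe.score(X, y))

The probe is intentionally shallow: the better it performs on frozen features, the more of the target property is taken to be easily recoverable from the representations themselves.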
Enough syntactic information seems to be captured in the token embeddings themselves to recover syntactic trees (Vilares et al., 2020; Kim et al., 2020; Rosa and Mare\u010dek, 2019) , although probing classifiers could not recover the labels of distant parent nodes in the syntactic tree . Warstadt and Bowman (2020) report evidence of hierarchical structure in three out of four probing tasks.", "cite_spans": [ { "start": 417, "end": 438, "text": "Tenney et al. (2019b)", "ref_id": "BIBREF140" }, { "start": 659, "end": 681, "text": "(Vilares et al., 2020;", "ref_id": "BIBREF149" }, { "start": 682, "end": 699, "text": "Kim et al., 2020;", "ref_id": "BIBREF65" }, { "start": 700, "end": 723, "text": "Rosa and Mare\u010dek, 2019)", "ref_id": "BIBREF115" }, { "start": 832, "end": 858, "text": "Warstadt and Bowman (2020)", "ref_id": "BIBREF162" } ], "ref_spans": [], "eq_spans": [], "section": "What Knowledge Does BERT Have?", "sec_num": "3" }, { "text": "As far as how syntax is represented, it seems that syntactic structure is not directly encoded in self-attention weights. Htut et al. (2019) were unable to extract full parse trees from BERT heads even with the gold annotations for the root. Jawahar et al. (2019) include a brief illustration of a dependency tree extracted directly from selfattention weights, but provide no quantitative evaluation.", "cite_spans": [ { "start": 242, "end": 263, "text": "Jawahar et al. (2019)", "ref_id": "BIBREF58" } ], "ref_spans": [], "eq_spans": [], "section": "Syntactic Knowledge", "sec_num": "3.1" }, { "text": "However, syntactic information can be recovered from BERT token representations. Hewitt and Manning (2019) were able to learn transformation matrices that successfully recovered syntactic dependencies in PennTreebank data from BERT's token embeddings (see also . Jawahar et al. (2019) experimented with transformations of the [CLS] token using Tensor Product Decomposition Networks (McCoy et al., 2019a) , concluding that dependency trees are the best match among five decomposition schemes (although the reported MSE differences are very small). Miaschi and Dell'Orletta (2020) perform a range of syntactic probing experiments with concatenated token representations as input.", "cite_spans": [ { "start": 81, "end": 106, "text": "Hewitt and Manning (2019)", "ref_id": "BIBREF52" }, { "start": 263, "end": 284, "text": "Jawahar et al. (2019)", "ref_id": "BIBREF58" }, { "start": 382, "end": 403, "text": "(McCoy et al., 2019a)", "ref_id": "BIBREF87" } ], "ref_spans": [], "eq_spans": [], "section": "Syntactic Knowledge", "sec_num": "3.1" }, { "text": "Note that all these approaches look for the evidence of gold-standard linguistic structures, Figure 1 : Parameter-free probe for syntactic knowledge: words sharing syntactic subtrees have larger impact on each other in the MLM prediction .", "cite_spans": [], "ref_spans": [ { "start": 93, "end": 101, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Syntactic Knowledge", "sec_num": "3.1" }, { "text": "and add some amount of extra knowledge to the probe. Most recently, proposed a parameter-free approach based on measuring the impact that one word has on predicting another word within a sequence in the MLM task ( Figure 1 ). 
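As a concrete illustration of this family of probes (not the exact procedure of the cited work), the sketch below scores the impact of word i on word j as the change in j's [MASK] representation when i is additionally masked. The model choice, the example sentence, the assumption that every word is a single wordpiece, and the use of Euclidean distance are all simplifications.

import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased").eval()

def impact(words, i, j):
    # Compare the representation of [MASK] at position j with and without word i also masked.
    ids = tokenizer.convert_tokens_to_ids(words)            # assumes one wordpiece per word
    def hidden_at_j(masked_positions):
        seq = list(ids)
        for p in masked_positions:
            seq[p] = tokenizer.mask_token_id
        batch = torch.tensor([[tokenizer.cls_token_id] + seq + [tokenizer.sep_token_id]])
        with torch.no_grad():
            return model(batch).last_hidden_state[0, j + 1]   # +1 skips [CLS]
    return torch.dist(hidden_at_j([j]), hidden_at_j([i, j])).item()

words = ["the", "keys", "to", "the", "cabinet", "are", "on", "the", "table"]
print(impact(words, 1, 5))    # how much "keys" affects the prediction site of "are"

Larger values indicate that hiding word i noticeably changes what the model expects at position j, and such pairwise impact matrices can then be compared against syntactic trees.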
They concluded that BERT ''naturally'' learns some syntactic information, although it is not very similar to linguistic annotated resources.", "cite_spans": [], "ref_spans": [ { "start": 214, "end": 222, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Syntactic Knowledge", "sec_num": "3.1" }, { "text": "The fill-in-the-gap probes of MLM showed that BERT takes subject-predicate agreement into account when performing the cloze task (Goldberg, 2019; van Schijndel et al., 2019) , even for meaningless sentences and sentences with distractor clauses between the subject and the verb (Goldberg, 2019) . A study of negative polarity items (NPIs) by Warstadt et al. (2019) showed that BERT is better able to detect the presence of NPIs (e.g., ''ever'') and the words that allow their use (e.g., ''whether'') than scope violations.", "cite_spans": [ { "start": 129, "end": 145, "text": "(Goldberg, 2019;", "ref_id": "BIBREF45" }, { "start": 146, "end": 173, "text": "van Schijndel et al., 2019)", "ref_id": "BIBREF145" }, { "start": 278, "end": 294, "text": "(Goldberg, 2019)", "ref_id": "BIBREF45" }, { "start": 342, "end": 364, "text": "Warstadt et al. (2019)", "ref_id": "BIBREF163" } ], "ref_spans": [], "eq_spans": [], "section": "Syntactic Knowledge", "sec_num": "3.1" }, { "text": "The above claims of syntactic knowledge are belied by the evidence that BERT does not ''understand'' negation and is insensitive to malformed input. In particular, its predictions were not altered 2 even with shuffled word order, truncated sentences, removed subjects and objects (Ettinger, 2019) . This could mean that either BERT's syntactic knowledge is incomplete, or it does not need to rely on it for solving its tasks. The latter seems more likely, since Glava\u0161 and Vuli\u0107 (2020) report that an intermediate fine-tuning step with supervised parsing does not make much difference for downstream task performance.", "cite_spans": [ { "start": 280, "end": 296, "text": "(Ettinger, 2019)", "ref_id": "BIBREF35" }, { "start": 462, "end": 485, "text": "Glava\u0161 and Vuli\u0107 (2020)", "ref_id": "BIBREF43" } ], "ref_spans": [], "eq_spans": [], "section": "Syntactic Knowledge", "sec_num": "3.1" }, { "text": "To date, more studies have been devoted to BERT's knowledge of syntactic rather than semantic phenomena. However, we do have evidence from an MLM probing study that BERT has some knowledge of semantic roles (Ettinger, 2019) . BERT even displays some preference for the incorrect fillers for semantic roles that are semantically related to the correct ones, as opposed to those that are unrelated (e.g., ''to tip a chef'' is better than ''to tip a robin'', but worse than ''to tip a waiter''). Tenney et al. (2019b) showed that BERT encodes information about entity types, relations, semantic roles, and proto-roles, since this information can be detected with probing classifiers.", "cite_spans": [ { "start": 207, "end": 223, "text": "(Ettinger, 2019)", "ref_id": "BIBREF35" }, { "start": 493, "end": 514, "text": "Tenney et al. (2019b)", "ref_id": "BIBREF140" } ], "ref_spans": [], "eq_spans": [], "section": "Semantic Knowledge", "sec_num": "3.2" }, { "text": "BERT struggles with representations of numbers. Addition and number decoding tasks showed that BERT does not form good representations for floating point numbers and fails to generalize away from the training data (Wallace et al., 2019b) . 
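A quick way to see how the tokenizer treats numerals is to segment a few values directly (the snippet assumes the standard bert-base-uncased vocabulary; the sample numbers are arbitrary):

from transformers import BertTokenizer

tok = BertTokenizer.from_pretrained("bert-base-uncased")
for number in ["72", "72.3", "7234", "7235", "0.7234"]:
    print(number, tok.tokenize(number))   # values of similar magnitude may be segmented quite differently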
A part of the problem is BERT's wordpiece tokenization, since numbers of similar values can be divided up into substantially different word chunks.", "cite_spans": [ { "start": 214, "end": 237, "text": "(Wallace et al., 2019b)", "ref_id": "BIBREF154" } ], "ref_spans": [], "eq_spans": [], "section": "Semantic Knowledge", "sec_num": "3.2" }, { "text": "Out-of-the-box BERT is surprisingly brittle to named entity replacements: For example, replacing names in the coreference task changes 85% of predictions (Balasubramanian et al., 2020) . This suggests that the model does not actually form a generic idea of named entities, although its F1 scores on NER probing tasks are high (Tenney et al., 2019a) . Broscheit (2019) finds that fine-tuning BERT on Wikipedia entity linking ''teaches'' it additional entity knowledge, which would suggest that it did not absorb all the relevant entity information during pre-training on Wikipedia.", "cite_spans": [ { "start": 154, "end": 184, "text": "(Balasubramanian et al., 2020)", "ref_id": "BIBREF9" }, { "start": 326, "end": 348, "text": "(Tenney et al., 2019a)", "ref_id": "BIBREF139" }, { "start": 351, "end": 367, "text": "Broscheit (2019)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Semantic Knowledge", "sec_num": "3.2" }, { "text": "are not well-formed from the point of view of a human reader (Wallace et al., 2019a) . (Petroni et al., 2019) .", "cite_spans": [ { "start": 61, "end": 84, "text": "(Wallace et al., 2019a)", "ref_id": "BIBREF153" }, { "start": 87, "end": 109, "text": "(Petroni et al., 2019)", "ref_id": "BIBREF99" } ], "ref_spans": [], "eq_spans": [], "section": "Semantic Knowledge", "sec_num": "3.2" }, { "text": "The bulk of evidence about commonsense knowledge captured in BERT comes from practitioners using it to extract such knowledge. One direct probing study of BERT reports that BERT struggles with pragmatic inference and role-based event knowledge (Ettinger, 2019) . BERT also struggles with abstract attributes of objects, as well as visual and perceptual properties that are likely to be assumed rather than mentioned (Da and Kasai, 2019) .", "cite_spans": [ { "start": 244, "end": 260, "text": "(Ettinger, 2019)", "ref_id": "BIBREF35" }, { "start": 416, "end": 436, "text": "(Da and Kasai, 2019)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "World Knowledge", "sec_num": "3.3" }, { "text": "The MLM component of BERT is easy to adapt for knowledge induction by filling in the blanks (e.g., ''Cats like to chase [ ]''). Petroni et al. (2019) showed that, for some relation types, vanilla BERT is competitive with methods relying on knowledge bases (Figure 2 ), and Roberts et al. (2020) show the same for open-domain QA using the T5 model (Raffel et al., 2019) . Davison et al. (2019) suggest that it generalizes better to unseen data. In order to retrieve BERT's knowledge, we need good template sentences, and there is work on their automatic extraction and augmentation (Bouraoui et al., 2019; Jiang et al., 2019b) .", "cite_spans": [ { "start": 128, "end": 149, "text": "Petroni et al. (2019)", "ref_id": "BIBREF99" }, { "start": 273, "end": 294, "text": "Roberts et al. (2020)", "ref_id": null }, { "start": 347, "end": 368, "text": "(Raffel et al., 2019)", "ref_id": "BIBREF106" }, { "start": 371, "end": 392, "text": "Davison et al. 
(2019)", "ref_id": "BIBREF30" }, { "start": 581, "end": 604, "text": "(Bouraoui et al., 2019;", "ref_id": "BIBREF14" }, { "start": 605, "end": 625, "text": "Jiang et al., 2019b)", "ref_id": "BIBREF60" } ], "ref_spans": [ { "start": 256, "end": 265, "text": "(Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "World Knowledge", "sec_num": "3.3" }, { "text": "However, BERT cannot reason based on its world knowledge. Forbes et al. (2019) show that BERT can ''guess'' the affordances and properties of many objects, but cannot reason about the relationship between properties and affordances. For example, it ''knows'' that people can walk into houses, and that houses are big, but it cannot infer that houses are bigger than people. and Richardson and Sabharwal (2019) also show that the performance drops with the number of necessary inference steps. Some of BERT's world knowledge success comes from learning stereotypical associations (Poerner et al., 2019) , for example, a person with an Italian-sounding name is predicted to be Italian, even when it is incorrect.", "cite_spans": [ { "start": 58, "end": 78, "text": "Forbes et al. (2019)", "ref_id": "BIBREF37" }, { "start": 378, "end": 409, "text": "Richardson and Sabharwal (2019)", "ref_id": "BIBREF112" }, { "start": 579, "end": 601, "text": "(Poerner et al., 2019)", "ref_id": "BIBREF102" } ], "ref_spans": [], "eq_spans": [], "section": "World Knowledge", "sec_num": "3.3" }, { "text": "Multiple probing studies in section 3 and section 4 report that BERT possesses a surprising amount of syntactic, semantic, and world knowledge. However, Tenney et al. (2019a) remark, ''the fact that a linguistic pattern is not observed by our probing classifier does not guarantee that it is not there, and the observation of a pattern does not tell us how it is used.'' There is also the issue of how complex a probe should be allowed to be . If a more complex probe recovers more information, to what extent are we still relying on the original model?", "cite_spans": [ { "start": 153, "end": 174, "text": "Tenney et al. (2019a)", "ref_id": "BIBREF139" } ], "ref_spans": [], "eq_spans": [], "section": "Limitations", "sec_num": "3.4" }, { "text": "Furthermore, different probing methods may lead to complementary or even contradictory conclusions, which makes a single test (as in most studies) insufficient (Warstadt et al., 2019) . A given method might also favor one model over another, for example, RoBERTa trails BERT with one tree extraction method, but leads with another (Htut et al., 2019) . The choice of linguistic formalism also matters (Kuznetsov and Gurevych, 2020) .", "cite_spans": [ { "start": 160, "end": 183, "text": "(Warstadt et al., 2019)", "ref_id": "BIBREF163" }, { "start": 331, "end": 350, "text": "(Htut et al., 2019)", "ref_id": "BIBREF56" }, { "start": 401, "end": 431, "text": "(Kuznetsov and Gurevych, 2020)", "ref_id": "BIBREF72" } ], "ref_spans": [], "eq_spans": [], "section": "Limitations", "sec_num": "3.4" }, { "text": "In view of all that, the alternative is to focus on identifying what BERT actually relies on at inference time. This direction is currently pursued both at the level of architecture blocks (to be discussed in detail in subsection 6.3), and at the level of information encoded in model weights. 
Amnesic probing (Elazar et al., 2020) aims to specifically remove certain information from the model and see how it changes performance, finding, for example, that language modeling does rely on part-of-speech information.", "cite_spans": [ { "start": 310, "end": 331, "text": "(Elazar et al., 2020)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Limitations", "sec_num": "3.4" }, { "text": "Another direction is information-theoretic probing. Pimentel et al. (2020) operationalize probing as estimating mutual information between the learned representation and a given linguistic property, which highlights that the focus should be not on the amount of information contained in a representation, but rather on how easily it can be extracted from it. Voita and Titov (2020) quantify the amount of effort needed to extract information from a given representation as minimum description length needed to communicate both the probe size and the amount of data required for it to do well on a task.", "cite_spans": [ { "start": 52, "end": 74, "text": "Pimentel et al. (2020)", "ref_id": "BIBREF101" } ], "ref_spans": [], "eq_spans": [], "section": "Limitations", "sec_num": "3.4" }, { "text": "In studies of BERT, the term ''embedding'' refers to the output of a Transformer layer (typically, the final one). Both conventional static embeddings (Mikolov et al., 2013) and BERT-style embeddings can be viewed in terms of mutual information maximization (Kong et al., 2019) , but the latter are contextualized. Every token is represented by a vector dependent on the particular context of occurrence, and contains at least some information about that context (Miaschi and Dell'Orletta, 2020) .", "cite_spans": [ { "start": 151, "end": 173, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF93" }, { "start": 258, "end": 277, "text": "(Kong et al., 2019)", "ref_id": "BIBREF68" }, { "start": 463, "end": 495, "text": "(Miaschi and Dell'Orletta, 2020)", "ref_id": "BIBREF89" } ], "ref_spans": [], "eq_spans": [], "section": "BERT Embeddings", "sec_num": "4.1" }, { "text": "Several studies reported that distilled contextualized embeddings better encode lexical semantic information (i.e., they are better at traditional word-level tasks such as word similarity). The methods to distill a contextualized representation into static include aggregating the information across multiple contexts (Akbik et al., 2019; Bommasani et al., 2020) , encoding ''semantically bleached'' sentences that rely almost exclusively on the meaning of a given word (e.g., \"This is <>\") (May et al., 2019) , and even using contextualized embeddings to train static embeddings (Wang et al., 2020d) .", "cite_spans": [ { "start": 318, "end": 338, "text": "(Akbik et al., 2019;", "ref_id": "BIBREF1" }, { "start": 339, "end": 362, "text": "Bommasani et al., 2020)", "ref_id": "BIBREF13" }, { "start": 491, "end": 509, "text": "(May et al., 2019)", "ref_id": "BIBREF85" }, { "start": 580, "end": 600, "text": "(Wang et al., 2020d)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "BERT Embeddings", "sec_num": "4.1" }, { "text": "But this is not to say that there is no room for improvement. Ethayarajh (2019) measure how similar the embeddings for identical words are in every layer, reporting that later BERT layers produce more context-specific representations. 3 They also find that BERT embeddings occupy a narrow cone in the vector space, and this effect increases from the earlier to later layers. 
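The narrow-cone claim can be checked with a rough estimate of the average cosine similarity between contextual embeddings of unrelated tokens at each layer (a sketch only; the published analysis uses corpus-level estimates and additional corrections, and the two sentences here are arbitrary):

import torch
import torch.nn.functional as F
from transformers import BertModel, BertTokenizer

tok = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_hidden_states=True).eval()

a = tok("The committee approved the budget yesterday.", return_tensors="pt")
b = tok("Penguins huddle together during the antarctic winter.", return_tensors="pt")
with torch.no_grad():
    ha = model(**a).hidden_states        # tuple: embedding layer + 12 encoder layers
    hb = model(**b).hidden_states

for layer in range(1, 13):
    x = F.normalize(ha[layer][0][1:-1], dim=-1)       # drop [CLS] and [SEP]
    y = F.normalize(hb[layer][0][1:-1], dim=-1)
    print(layer, round((x @ y.T).mean().item(), 3))   # mean cross-sentence cosine similarity

Values well above zero that grow with depth would be consistent with the embeddings concentrating in a narrow cone.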
That is, two random words will on average have a much higher cosine similarity than expected if embeddings were directionally uniform (isotropic). Because isotropy was shown to be beneficial for static word embeddings (Mu and Viswanath, 2018) , this might be a fruitful direction to explore for BERT.", "cite_spans": [ { "start": 235, "end": 236, "text": "3", "ref_id": null }, { "start": 593, "end": 617, "text": "(Mu and Viswanath, 2018)", "ref_id": "BIBREF94" } ], "ref_spans": [], "eq_spans": [], "section": "BERT Embeddings", "sec_num": "4.1" }, { "text": "Because BERT embeddings are contextualized, an interesting question is to what extent they capture phenomena like polysemy and homonymy. There is indeed evidence that BERT's contextualized embeddings form distinct clusters corresponding to word senses (Wiedemann et al., 2019; Schmidt and Hofmann, 2020) , making BERT successful at word sense disambiguation task. However, Mickus et al. (2019) note that the representations of the same word depend on the position of the sentence in which it occurs, likely due to the NSP objective. This is not desirable from the linguistic point of view, and could be a promising avenue for future work.", "cite_spans": [ { "start": 252, "end": 276, "text": "(Wiedemann et al., 2019;", "ref_id": "BIBREF164" }, { "start": 277, "end": 303, "text": "Schmidt and Hofmann, 2020)", "ref_id": "BIBREF119" }, { "start": 373, "end": 393, "text": "Mickus et al. (2019)", "ref_id": "BIBREF91" } ], "ref_spans": [], "eq_spans": [], "section": "BERT Embeddings", "sec_num": "4.1" }, { "text": "The above discussion concerns token embeddings, but BERT is typically used as a sentence or text encoder. The standard way to generate sentence or text representations for classification is to use the [CLS] token, but alternatives are also being discussed, including concatenation of token representations (Tanaka et al., 2020) , normalized mean (Tanaka et al., 2020) , and layer activations (Ma et al., 2019) . See Toshniwal et al. 2020for a systematic comparison of several methods across tasks and sentence encoders.", "cite_spans": [ { "start": 306, "end": 327, "text": "(Tanaka et al., 2020)", "ref_id": "BIBREF137" }, { "start": 346, "end": 367, "text": "(Tanaka et al., 2020)", "ref_id": "BIBREF137" }, { "start": 392, "end": 409, "text": "(Ma et al., 2019)", "ref_id": "BIBREF83" } ], "ref_spans": [], "eq_spans": [], "section": "BERT Embeddings", "sec_num": "4.1" }, { "text": "Several studies proposed classification of attention head types. Raganato and Tiedemann (2018) discuss attending to the token itself, previous/next tokens, and the sentence end. Clark et al. (2019) distinguish between attending to previous/next tokens, [CLS], [SEP] , punctuation, and ''attending broadly'' over the sequence. Kovaleva et al. (2019) propose five patterns, shown in Figure 3 .", "cite_spans": [ { "start": 178, "end": 197, "text": "Clark et al. (2019)", "ref_id": "BIBREF21" }, { "start": 260, "end": 265, "text": "[SEP]", "ref_id": null }, { "start": 326, "end": 348, "text": "Kovaleva et al. (2019)", "ref_id": "BIBREF69" } ], "ref_spans": [ { "start": 381, "end": 389, "text": "Figure 3", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Self-attention Heads", "sec_num": "4.2" }, { "text": "The ''heterogeneous'' attention pattern shown in Figure 3 could potentially be linguistically interpretable, and a number of studies focused on identifying the functions of self-attention heads. 
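As a starting point for such analyses, the per-head attention maps can be read off the model directly and summarized with simple statistics, for example, how much of each head's attention mass goes to [SEP] or to the previous token (a sketch; the input sentence and the two diagnostics are illustrative choices):

import torch
from transformers import BertModel, BertTokenizer

tok = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_attentions=True).eval()

inputs = tok("The keys to the cabinet are on the table.", return_tensors="pt")
with torch.no_grad():
    attentions = model(**inputs).attentions          # 12 tensors of shape (1, num_heads, seq, seq)

sep_pos = int((inputs["input_ids"][0] == tok.sep_token_id).nonzero())
for layer, att in enumerate(attentions):
    att = att[0]                                     # (num_heads, seq, seq)
    to_sep = att[:, :, sep_pos].mean(dim=-1)         # average weight each head puts on [SEP]
    to_prev = torch.diagonal(att, offset=-1, dim1=-2, dim2=-1).mean(dim=-1)   # weight on previous token
    print(layer, [round(v, 2) for v in to_sep.tolist()], [round(v, 2) for v in to_prev.tolist()])

Heads dominated by diagonal or vertical patterns can be flagged automatically in this way before any linguistic interpretation is attempted.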
In particular, some BERT heads seem to specialize in certain types of syntactic relations. Htut et al. (2019) and Clark et al. (2019) report that there are BERT heads that attended significantly more than a random baseline to words in certain syntactic positions. The datasets and methods used in these studies differ, but they both find that there are heads that attend to words in obj role more than the positional baseline. The evidence for nsubj, advmod, and amod varies between these two studies. The overall conclusion is also supported by Voita et al.'s (2019b) study of the base Transformer in machine translation context. Hoover et al. (2019) hypothesize that even complex dependencies like dobj are encoded by a combination of heads rather than a single head, but this work is limited to qualitative analysis. Zhao and Bethard (2020) looked specifically for the heads encoding negation scope.", "cite_spans": [ { "start": 309, "end": 328, "text": "Clark et al. (2019)", "ref_id": "BIBREF21" }, { "start": 826, "end": 846, "text": "Hoover et al. (2019)", "ref_id": "BIBREF54" }, { "start": 1015, "end": 1038, "text": "Zhao and Bethard (2020)", "ref_id": "BIBREF185" } ], "ref_spans": [ { "start": 49, "end": 57, "text": "Figure 3", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Heads With Linguistic Functions", "sec_num": "4.2.1" }, { "text": "Both Clark et al. (2019) and Htut et al. (2019) conclude that no single head has the complete syntactic tree information, in line with evidence of partial knowledge of syntax (cf. subsection 3.1). However, Clark et al. (2019) identify a BERT head that can be directly used as a classifier to perform coreference resolution on par with a rule-based system, which by itself would seem to require quite a lot of syntactic knowledge. present evidence that attention weights are weak indicators of subject-verb agreement and reflexive anaphora. Instead of serving as strong pointers between tokens that should be related, BERT's self-attention weights were close to a uniform attention baseline, but there was some sensitivity to different types of distractors coherent with psycholinguistic data. This is consistent with conclusions by Ettinger (2019).", "cite_spans": [ { "start": 5, "end": 24, "text": "Clark et al. (2019)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Heads With Linguistic Functions", "sec_num": "4.2.1" }, { "text": "To our knowledge, morphological information in BERT heads has not been addressed, but with the sparse attention variant by Correia et al. (2019) in the base Transformer, some attention heads appear to merge BPE-tokenized words. For semantic relations, there are reports of selfattention heads encoding core frame-semantic relations (Kovaleva et al., 2019) , as well as lexicographic and commonsense relations .", "cite_spans": [ { "start": 332, "end": 355, "text": "(Kovaleva et al., 2019)", "ref_id": "BIBREF69" } ], "ref_spans": [], "eq_spans": [], "section": "Heads With Linguistic Functions", "sec_num": "4.2.1" }, { "text": "The overall popularity of self-attention as an interpretability mechanism is due to the idea that ''attention weight has a clear meaning: how much a particular word will be weighted when computing the next representation for the current word'' (Clark et al., 2019) . 
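For reference, the weights in question are the row-normalized scores of scaled dot-product attention,

\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V,

where the (i, j) entry of the softmax term is the weight placed on token j when a head computes the updated representation of token i. On that reading, this entry is a direct measure of how much token j contributes to token i.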
This view is currently debated (Jain and Wallace, 2019; Serrano and Smith, 2019; Wiegreffe and Pinter, 2019; Brunner et al., 2020) , and in a multilayer model where attention is followed by nonlinear transformations, the patterns in individual heads do not provide a full picture. Also, although many current papers are accompanied by attention visualizations, and there is a growing number of visualization tools (Vig, 2019; Hoover et al., 2019) , the visualization is typically limited to qualitative analysis (often with cherry-picked examples) (Belinkov and Glass, 2019) , and should not be interpreted as definitive evidence.", "cite_spans": [ { "start": 244, "end": 264, "text": "(Clark et al., 2019)", "ref_id": "BIBREF21" }, { "start": 298, "end": 322, "text": "(Jain and Wallace, 2019;", "ref_id": "BIBREF57" }, { "start": 323, "end": 347, "text": "Serrano and Smith, 2019;", "ref_id": "BIBREF121" }, { "start": 348, "end": 375, "text": "Wiegreffe and Pinter, 2019;", "ref_id": "BIBREF165" }, { "start": 376, "end": 397, "text": "Brunner et al., 2020)", "ref_id": "BIBREF18" }, { "start": 681, "end": 692, "text": "(Vig, 2019;", "ref_id": "BIBREF147" }, { "start": 693, "end": 713, "text": "Hoover et al., 2019)", "ref_id": "BIBREF54" }, { "start": 815, "end": 841, "text": "(Belinkov and Glass, 2019)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Heads With Linguistic Functions", "sec_num": "4.2.1" }, { "text": "Kovaleva et al. (2019) show that most self-attention heads do not directly encode any non-trivial linguistic information, at least when fine-tuned on GLUE (Wang et al., 2018) , since only fewer than 50% of heads exhibit the ''heterogeneous'' pattern. Much of the model produced the vertical pattern (attention to [CLS], [SEP] , and punctuation tokens), consistent with the observations by Clark et al. (2019) . This redundancy is likely related to the overparameterization issue (see section 6).", "cite_spans": [ { "start": 151, "end": 170, "text": "(Wang et al., 2018)", "ref_id": "BIBREF155" }, { "start": 316, "end": 321, "text": "[SEP]", "ref_id": null }, { "start": 385, "end": 404, "text": "Clark et al. (2019)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Attention to Special Tokens", "sec_num": "4.2.2" }, { "text": "More recently, Kobayashi et al. (2020) showed that the norms of attention-weighted input vectors, which yield a more intuitive interpretation of self-attention, reduce the attention to special tokens. However, even when the attention weights are normed, it is still not the case that most heads that do the ''heavy lifting'' are even potentially interpretable (Prasanna et al., 2020) .", "cite_spans": [ { "start": 15, "end": 38, "text": "Kobayashi et al. (2020)", "ref_id": "BIBREF66" }, { "start": 359, "end": 382, "text": "(Prasanna et al., 2020)", "ref_id": "BIBREF103" } ], "ref_spans": [], "eq_spans": [], "section": "Attention to Special Tokens", "sec_num": "4.2.2" }, { "text": "One methodological choice in many studies of attention is to focus on inter-word attention and simply exclude special tokens (e.g., Htut et al. [2019] ). 
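In practice this usually amounts to dropping the special-token columns from each attention map and renormalizing the rows, roughly as in the sketch below (a hypothetical helper; individual studies differ in the details):

import torch

def interword_attention(att, special_positions):
    # att: (num_heads, seq, seq) attention maps for one layer.
    # Zero out the columns of special tokens such as [CLS] and [SEP],
    # then renormalize each row so the remaining inter-word weights sum to 1.
    att = att.clone()
    att[:, :, special_positions] = 0.0
    return att / att.sum(dim=-1, keepdim=True).clamp(min=1e-12)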
However, if attention to special tokens actually matters at inference time, drawing conclusions purely from inter-word attention patterns does not seem warranted.", "cite_spans": [ { "start": 151, "end": 157, "text": "[2019]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Attention to Special Tokens", "sec_num": "4.2.2" }, { "text": "The functions of special tokens are not yet well understood.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attention to Special Tokens", "sec_num": "4.2.2" }, { "text": "[CLS] is typically viewed as an aggregated sentence-level representation (although all token representations also contain at least some sentence-level information, as discussed in subsection 4.1); in that case, we may not see, for example, full syntactic trees in inter-word atten-tion because part of that information is actually packed in [CLS] . Clark et al. (2019) experiment with encoding Wikipedia paragraphs with base BERT to consider specifically the attention to special tokens, noting that heads in early layers attend more to [CLS], in middle layers to [SEP] , and in final layers to periods and commas. They hypothesize that its function might be one of ''no-op'', a signal to ignore the head if its pattern is not applicable to the current case. As a result, for example, [SEP] gets increased attention starting in layer 5, but its importance for prediction drops. However, after fine-tuning both [SEP] and [CLS] get a lot of attention, depending on the task (Kovaleva et al., 2019) . Interestingly, BERT also pays a lot of attention to punctuation, which Clark et al. (2019) explain by the fact that periods and commas are simply almost as frequent as the special tokens, and so the model might learn to rely on them for the same reasons.", "cite_spans": [ { "start": 341, "end": 346, "text": "[CLS]", "ref_id": null }, { "start": 349, "end": 368, "text": "Clark et al. (2019)", "ref_id": "BIBREF21" }, { "start": 564, "end": 569, "text": "[SEP]", "ref_id": null }, { "start": 785, "end": 790, "text": "[SEP]", "ref_id": null }, { "start": 972, "end": 995, "text": "(Kovaleva et al., 2019)", "ref_id": "BIBREF69" }, { "start": 1069, "end": 1088, "text": "Clark et al. (2019)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Attention to Special Tokens", "sec_num": "4.2.2" }, { "text": "The first layer of BERT receives as input a combination of token, segment, and positional embeddings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "BERT Layers", "sec_num": "4.3" }, { "text": "It stands to reason that the lower layers have the most information about linear word order. Lin et al. (2019) report a decrease in the knowledge of linear word order around layer 4 in BERT-base. This is accompanied by an increased knowledge of hierarchical sentence structure, as detected by the probing tasks of predicting the token index, the main auxiliary verb and the sentence subject.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "BERT Layers", "sec_num": "4.3" }, { "text": "There is a wide consensus in studies with different tasks, datasets, and methodologies that syntactic information is most prominent in the middle layers of BERT. 4 Hewitt and Manning (2019) had the most success reconstructing syntactic tree depth from the middle BERT layers (6-9 for base-BERT, 14-19 for BERT-large). Goldberg (2019) reports the best subject-verb agreement around layers 8-9, and the performance on syntactic probing tasks used by Jawahar et al. 
(2019) also seems to peak around the middle of the model. The prominence of syntactic information in the middle BERT layers is related to Liu et al.'s (2019a) observation that the middle layers of Transformers are best-performing overall and the most transferable across tasks (see Figure 4 ).", "cite_spans": [ { "start": 162, "end": 163, "text": "4", "ref_id": null }, { "start": 164, "end": 189, "text": "Hewitt and Manning (2019)", "ref_id": "BIBREF52" } ], "ref_spans": [ { "start": 745, "end": 753, "text": "Figure 4", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "BERT Layers", "sec_num": "4.3" }, { "text": "There is conflicting evidence about syntactic chunks. Tenney et al. (2019a) conclude that ''the basic syntactic information appears earlier in the network while high-level semantic features appear at the higher layers'', drawing parallels between this order and the order of components in a typical NLP pipeline-from POS-tagging to dependency parsing to semantic role labeling. Jawahar et al. (2019) also report that the lower layers were more useful for chunking, while middle layers were more useful for parsing. At the same time, the probing experiments by Liu et al. (2019a) find the opposite: Both POS-tagging and chunking were performed best at the middle layers, in both BERT-base and BERT-large. However, all three studies use different suites of probing tasks.", "cite_spans": [ { "start": 54, "end": 75, "text": "Tenney et al. (2019a)", "ref_id": "BIBREF139" }, { "start": 378, "end": 399, "text": "Jawahar et al. (2019)", "ref_id": "BIBREF58" } ], "ref_spans": [], "eq_spans": [], "section": "BERT Layers", "sec_num": "4.3" }, { "text": "The final layers of BERT are the most taskspecific. In pre-training, this means specificity to the MLM task, which explains why the middle layers are more transferable . In fine-tuning, it explains why the final layers change the most (Kovaleva et al., 2019) , and why restoring the weights of lower layers of fine-tuned BERT to their original values does not dramatically hurt the model performance (Hao et al., 2019) . Tenney et al. (2019a) suggest that whereas syntactic information appears early in the model and can be localized, semantics is spread across the entire model, which explains why certain non-trivial examples get solved incorrectly at first but correctly at the later layers. This is rather to be expected: Semantics permeates all language, and linguists debate whether meaningless structures can exist at all (Goldberg, 2006, p.166-182) . But this raises the question of what stacking more Transformer layers in BERT actually achieves in terms of the spread of semantic knowledge, and whether that is beneficial. Tenney et al. compared BERT-base and BERT-large, and found that the overall pattern of cumulative score gains is the same, only more spread out in the larger model.", "cite_spans": [ { "start": 235, "end": 258, "text": "(Kovaleva et al., 2019)", "ref_id": "BIBREF69" }, { "start": 400, "end": 418, "text": "(Hao et al., 2019)", "ref_id": "BIBREF51" }, { "start": 421, "end": 442, "text": "Tenney et al. (2019a)", "ref_id": "BIBREF139" }, { "start": 829, "end": 856, "text": "(Goldberg, 2006, p.166-182)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "BERT Layers", "sec_num": "4.3" }, { "text": "Note that Tenney et al.'s (2019a) experiments concern sentence-level semantic relations; report that the encoding of ConceptNet semantic relations is the worst in the early layers and increases towards the top. Jawahar et al. 
(2019) place ''surface features in lower layers, syntactic features in middle layers and semantic features in higher layers'', but their conclusion is surprising, given that only one semantic task in this study actually topped at the last layer, and three others peaked around the middle and then considerably degraded by the final layers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "BERT Layers", "sec_num": "4.3" }, { "text": "This section reviews the proposals to optimize the training and architecture of the original BERT.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training BERT", "sec_num": "5" }, { "text": "To date, the most systematic study of BERT architecture was performed by , who experimented with the number of layers, heads, and model parameters, varying one option and freezing the others. They concluded that the number of heads was not as significant as the number of layers. That is consistent with the findings of Voita et al. (2019b) and Michel et al. (2019) (section 6), and also the observation by that the middle layers were the most transferable. Larger hidden representation size was consistently better, but the gains varied by setting.", "cite_spans": [ { "start": 320, "end": 340, "text": "Voita et al. (2019b)", "ref_id": "BIBREF151" } ], "ref_spans": [], "eq_spans": [], "section": "Model Architecture Choices", "sec_num": "5.1" }, { "text": "All in all, changes in the number of heads and layers appear to perform different functions. The issue of model depth must be related to the information flow from the most task-specific layers closer to the classifier , to the initial layers which appear to be the most taskinvariant (Hao et al., 2019) , and where the tokens resemble the input tokens the most (Brunner et al., 2020 ) (see subsection 4.3). If that is the case, a deeper model has more capacity to encode information that is not task-specific.", "cite_spans": [ { "start": 284, "end": 302, "text": "(Hao et al., 2019)", "ref_id": "BIBREF51" }, { "start": 361, "end": 382, "text": "(Brunner et al., 2020", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Model Architecture Choices", "sec_num": "5.1" }, { "text": "On the other hand, many self-attention heads in vanilla BERT seem to naturally learn the same patterns (Kovaleva et al., 2019) . This explains why pruning them does not have too much impact. The question that arises from this is how far we could get with intentionally encouraging diverse self-attention patterns: Theoretically, this would mean increasing the amount of information in the model with the same number of weights. Raganato et al. (2020) show for Transformer-based machine translation we can simply pre-set the patterns that we already know the model would learn, instead of learning them from scratch.", "cite_spans": [ { "start": 103, "end": 126, "text": "(Kovaleva et al., 2019)", "ref_id": "BIBREF69" }, { "start": 428, "end": 450, "text": "Raganato et al. (2020)", "ref_id": "BIBREF107" } ], "ref_spans": [], "eq_spans": [], "section": "Model Architecture Choices", "sec_num": "5.1" }, { "text": "Vanilla BERT is symmetric and balanced in terms of self-attention and feed-forward layers, but it may not have to be. For the base Transformer, Press et al. (2020) report benefits from more self-attention sublayers at the bottom and more feedforward sublayers at the top.", "cite_spans": [ { "start": 144, "end": 163, "text": "Press et al. 
(2020)", "ref_id": "BIBREF104" } ], "ref_spans": [], "eq_spans": [], "section": "Model Architecture Choices", "sec_num": "5.1" }, { "text": "Liu et al. (2019b) demonstrate the benefits of large-batch training: With 8k examples, both the language model perplexity and downstream task performance are improved. They also publish their recommendations for other parameters. You et al. (2019) report that with a batch size of 32k BERT's training time can be significantly reduced with no degradation in performance. observe that the normalization of the trained [CLS] token stabilizes the training and slightly improves performance on text classification tasks. Gong et al. (2019) note that, because selfattention patterns in higher and lower layers are similar, the model training can be done in a recursive manner, where the shallower version is trained first and then the trained parameters are copied to deeper layers. Such a ''warm-start'' can lead to a 25% faster training without sacrificing performance.", "cite_spans": [ { "start": 230, "end": 247, "text": "You et al. (2019)", "ref_id": "BIBREF178" }, { "start": 517, "end": 535, "text": "Gong et al. (2019)", "ref_id": "BIBREF46" } ], "ref_spans": [], "eq_spans": [], "section": "Improvements to the Training Regime", "sec_num": "5.2" }, { "text": "The original BERT is a bidirectional Transformer pre-trained on two tasks: NSP and MLM (section 2). Multiple studies have come up with alternative training objectives to improve on BERT, and these could be categorized as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pre-training BERT", "sec_num": "5.3" }, { "text": "\u2022 How to mask. Raffel et al. (2019) systematically experiment with corruption rate and corrupted span length. Liu et al. (2019b) propose diverse masks for training examples within an epoch, while Baevski et al. (2019) mask every token in a sequence instead of a random selection. Clinchant et al. (2019) replace the MASK token with [UNK] token, to help the model learn a representation for unknowns that could be useful for translation. maximize the amount of information available to the model by conditioning on both masked and unmasked tokens, and letting the model see how many tokens are missing.", "cite_spans": [ { "start": 110, "end": 128, "text": "Liu et al. (2019b)", "ref_id": "BIBREF82" }, { "start": 196, "end": 217, "text": "Baevski et al. (2019)", "ref_id": "BIBREF7" }, { "start": 280, "end": 303, "text": "Clinchant et al. (2019)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Pre-training BERT", "sec_num": "5.3" }, { "text": "\u2022 What to mask. Masks can be applied to full words instead of word-pieces (Devlin et al., 2019; Cui et al., 2019) . Similarly, we can mask spans rather than single tokens (Joshi et al., 2020) , predicting how many are missing . Masking phrases and named entities (Sun et al., 2019b) improves representation of structured knowledge.", "cite_spans": [ { "start": 74, "end": 95, "text": "(Devlin et al., 2019;", "ref_id": "BIBREF31" }, { "start": 96, "end": 113, "text": "Cui et al., 2019)", "ref_id": "BIBREF28" }, { "start": 171, "end": 191, "text": "(Joshi et al., 2020)", "ref_id": "BIBREF63" }, { "start": 263, "end": 282, "text": "(Sun et al., 2019b)", "ref_id": "BIBREF131" } ], "ref_spans": [], "eq_spans": [], "section": "Pre-training BERT", "sec_num": "5.3" }, { "text": "\u2022 Where to mask. 
Lample and Conneau (2019) use arbitrary text streams instead of sentence pairs and subsample frequent outputs similar to Mikolov et al. (2013) . Bao et al. (2020) combine the standard autoencoding MLM with partially autoregressive LM objective using special pseudo mask tokens.", "cite_spans": [ { "start": 17, "end": 42, "text": "Lample and Conneau (2019)", "ref_id": "BIBREF73" }, { "start": 138, "end": 159, "text": "Mikolov et al. (2013)", "ref_id": "BIBREF93" }, { "start": 162, "end": 179, "text": "Bao et al. (2020)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Pre-training BERT", "sec_num": "5.3" }, { "text": "\u2022 Alternatives to masking. Raffel et al. (2019) experiment with replacing and dropping spans; explore deletion, infilling, sentence permutation and document rotation; and Sun et al. (2019c) predict whether a token is capitalized and whether it occurs in other segments of the same document. Yang et al. (2019) train on different permutations of word order in the input sequence, maximizing the probability of the original word order (cf. the n-gram word order reconstruction task (Wang et al., 2019a) ). detects tokens that were replaced by a generator network rather than masked.", "cite_spans": [ { "start": 27, "end": 47, "text": "Raffel et al. (2019)", "ref_id": "BIBREF106" }, { "start": 171, "end": 189, "text": "Sun et al. (2019c)", "ref_id": "BIBREF132" }, { "start": 291, "end": 309, "text": "Yang et al. (2019)", "ref_id": "BIBREF178" }, { "start": 480, "end": 500, "text": "(Wang et al., 2019a)", "ref_id": "BIBREF157" } ], "ref_spans": [], "eq_spans": [], "section": "Pre-training BERT", "sec_num": "5.3" }, { "text": "\u2022 NSP alternatives. Removing NSP does not hurt or slightly improves performance (Liu et al., 2019b; Joshi et al., 2020; Clinchant et al., 2019) . Wang et al. (2019a) and replace NSP with the task of predicting both the next and the previous sentences. Lan et al. (2020) replace the negative NSP examples by swapped sentences from positive examples, rather than sentences from different documents. ERNIE 2.0 includes sentence reordering and sentence distance prediction. replace both NSP and token position embeddings by a combination of paragraph, sentence, and token index embeddings. Li and Choi (2020) experiment with utterance order prediction task for multiparty dialogue (and also MLM at the level of utterances and the whole dialogue).", "cite_spans": [ { "start": 80, "end": 99, "text": "(Liu et al., 2019b;", "ref_id": "BIBREF82" }, { "start": 100, "end": 119, "text": "Joshi et al., 2020;", "ref_id": "BIBREF63" }, { "start": 120, "end": 143, "text": "Clinchant et al., 2019)", "ref_id": "BIBREF23" }, { "start": 146, "end": 165, "text": "Wang et al. (2019a)", "ref_id": "BIBREF157" }, { "start": 252, "end": 269, "text": "Lan et al. (2020)", "ref_id": null }, { "start": 586, "end": 604, "text": "Li and Choi (2020)", "ref_id": "BIBREF77" } ], "ref_spans": [], "eq_spans": [], "section": "Pre-training BERT", "sec_num": "5.3" }, { "text": "\u2022 Other tasks. Sun et al. (2019c) propose simultaneous learning of seven tasks, including discourse relation classification and predicting whether a segment is relevant for IR. Guu et al. (2020) include a latent knowledge retriever in language model pretraining. Wang et al. (2020c) combine MLM with a knowledge base completion objective. Glass et al. 
(2020) replace MLM with span prediction task (as in extractive question answering), where the model is expected to provide the answer not from its own weights, but from a different passage containing the correct answer (a relevant search engine query snippet).", "cite_spans": [ { "start": 15, "end": 33, "text": "Sun et al. (2019c)", "ref_id": "BIBREF132" }, { "start": 177, "end": 194, "text": "Guu et al. (2020)", "ref_id": "BIBREF50" }, { "start": 263, "end": 282, "text": "Wang et al. (2020c)", "ref_id": "BIBREF159" }, { "start": 339, "end": 358, "text": "Glass et al. (2020)", "ref_id": "BIBREF42" } ], "ref_spans": [], "eq_spans": [], "section": "Pre-training BERT", "sec_num": "5.3" }, { "text": "Another obvious source of improvement is pretraining data. Several studies explored the benefits of increasing the corpus volume (Liu et al., 2019b; Baevski et al., 2019) and longer training (Liu et al., 2019b) . The data also does not have to be raw text: There is a number efforts to incorporate explicit linguistic information, both syntactic (Sundararaman et al., 2019) and semantic . and Kumar et al. (2020) include the label for a given sequence from an annotated task dataset. Schick and Sch\u00fctze (2020) separately learn representations for rare words.", "cite_spans": [ { "start": 129, "end": 148, "text": "(Liu et al., 2019b;", "ref_id": "BIBREF82" }, { "start": 149, "end": 170, "text": "Baevski et al., 2019)", "ref_id": "BIBREF7" }, { "start": 191, "end": 210, "text": "(Liu et al., 2019b)", "ref_id": "BIBREF82" }, { "start": 346, "end": 373, "text": "(Sundararaman et al., 2019)", "ref_id": "BIBREF134" }, { "start": 393, "end": 412, "text": "Kumar et al. (2020)", "ref_id": "BIBREF71" } ], "ref_spans": [], "eq_spans": [], "section": "Pre-training BERT", "sec_num": "5.3" }, { "text": "Although BERT is already actively used as a source of world knowledge (see subsection 3.3), there is also work on explicitly supplying structured knowledge. One approach is entityenhanced models. For example, Peters et al. embeddings, but through the additional pretraining objective of knowledge base completion. Sun et al. (2019b,c) modify the standard MLM task to mask named entities rather than random words, and Yin et al. (2020) train with MLM objective over both text and linearized table data. Wang et al. (2020a) enhance RoBERTa with both linguistic and factual knowledge with task-specific adapters.", "cite_spans": [ { "start": 314, "end": 334, "text": "Sun et al. (2019b,c)", "ref_id": null }, { "start": 417, "end": 434, "text": "Yin et al. (2020)", "ref_id": "BIBREF175" }, { "start": 502, "end": 521, "text": "Wang et al. (2020a)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Pre-training BERT", "sec_num": "5.3" }, { "text": "Pre-training is the most expensive part of training BERT, and it would be informative to know how much benefit it provides. On some tasks, a randomly initialized and fine-tuned BERT obtains competitive or higher results than the pre-trained BERT with the task classifier and frozen weights (Kovaleva et al., 2019) . The consensus in the community is that pre-training does help in most situations, but the degree and its exact contribution requires further investigation. Prasanna et al. (2020) found that most weights of pre-trained BERT are useful in fine-tuning, although there are ''better'' and ''worse'' subnetworks. 
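Such comparisons are straightforward to set up, since the same architecture can be instantiated with or without the pre-trained weights; the sketch below also shows the frozen-encoder condition mentioned above (a transformers-based sketch; the two-label task is an arbitrary assumption):

from transformers import BertConfig, BertForSequenceClassification

# Same architecture, two initializations: pre-trained weights vs. random weights.
pretrained = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
random_init = BertForSequenceClassification(BertConfig(num_labels=2))   # bert-base shape, untrained

# To mimic the frozen-weights-plus-task-classifier condition, freeze the encoder of the pre-trained model.
for p in pretrained.bert.parameters():
    p.requires_grad = False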
One explanation is that pre-trained weights help the finetuned BERT find wider and flatter areas with smaller generalization error, which makes the model more robust to overfitting (see Figure 5 from Hao et al. [2019] ).", "cite_spans": [ { "start": 290, "end": 313, "text": "(Kovaleva et al., 2019)", "ref_id": "BIBREF69" }, { "start": 472, "end": 494, "text": "Prasanna et al. (2020)", "ref_id": "BIBREF103" }, { "start": 823, "end": 840, "text": "Hao et al. [2019]", "ref_id": "BIBREF51" } ], "ref_spans": [ { "start": 809, "end": 817, "text": "Figure 5", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Pre-training BERT", "sec_num": "5.3" }, { "text": "Given the large number and variety of proposed modifications, one would wish to know how much impact each of them has. However, due to the overall trend towards large model sizes, systematic ablations have become expensive. Most new models claim superiority on standard benchmarks, but gains are often marginal, and estimates of model stability and significance testing are very rare.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pre-training BERT", "sec_num": "5.3" }, { "text": "Pre-training + fine-tuning workflow is a crucial part of BERT. The former is supposed to provide task-independent knowledge, and the latter would presumably teach the model to rely more on the representations useful for the task at hand. Kovaleva et al. (2019) did not find that to be the case for BERT fine-tuned on GLUE tasks: 5 during fine-tuning, the most changes for three epochs occurred in the last two layers of the models, but those changes caused self-attention to focus on Several studies explored the possibilities of improving the fine-tuning of BERT:", "cite_spans": [ { "start": 238, "end": 260, "text": "Kovaleva et al. (2019)", "ref_id": "BIBREF69" } ], "ref_spans": [], "eq_spans": [], "section": "Fine-tuning BERT", "sec_num": "5.4" }, { "text": "\u2022 Taking more layers into account: learning a complementary representation of the information in deep and output layers , using a weighted combination of all layers instead of the final one (Su and Cheng, 2019; Kondratyuk and Straka, 2019) , and layer dropout (Kondratyuk and Straka, 2019) .", "cite_spans": [ { "start": 190, "end": 210, "text": "(Su and Cheng, 2019;", "ref_id": "BIBREF128" }, { "start": 211, "end": 239, "text": "Kondratyuk and Straka, 2019)", "ref_id": "BIBREF67" }, { "start": 260, "end": 289, "text": "(Kondratyuk and Straka, 2019)", "ref_id": "BIBREF67" } ], "ref_spans": [], "eq_spans": [], "section": "Fine-tuning BERT", "sec_num": "5.4" }, { "text": "\u2022 Two-stage fine-tuning introduces an intermediate supervised training stage between pre-training and fine-tuning Arase and Tsujii, 2019; Pruksachatkun et al., 2020; Glava\u0161 and Vuli\u0107, 2020) . Ben-David et al. (2020) propose a pivot-based variant of MLM to fine-tune BERT for domain adaptation.", "cite_spans": [ { "start": 114, "end": 137, "text": "Arase and Tsujii, 2019;", "ref_id": "BIBREF2" }, { "start": 138, "end": 165, "text": "Pruksachatkun et al., 2020;", "ref_id": "BIBREF105" }, { "start": 166, "end": 189, "text": "Glava\u0161 and Vuli\u0107, 2020)", "ref_id": "BIBREF43" }, { "start": 192, "end": 215, "text": "Ben-David et al. 
(2020)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Fine-tuning BERT", "sec_num": "5.4" }, { "text": "\u2022 Adversarial token perturbations improve the robustness of the model (Zhu et al., 2019) .", "cite_spans": [ { "start": 70, "end": 88, "text": "(Zhu et al., 2019)", "ref_id": "BIBREF188" } ], "ref_spans": [], "eq_spans": [], "section": "Fine-tuning BERT", "sec_num": "5.4" }, { "text": "\u2022 Adversarial regularization in combination with Bregman Proximal Point Optimization helps alleviate pre-trained knowledge forgetting and therefore prevents BERT from overfitting to downstream tasks (Jiang et al., 2019a) .", "cite_spans": [ { "start": 199, "end": 220, "text": "(Jiang et al., 2019a)", "ref_id": "BIBREF59" } ], "ref_spans": [], "eq_spans": [], "section": "Fine-tuning BERT", "sec_num": "5.4" }, { "text": "\u2022 Mixout regularization improves the stability of BERT fine-tuning even for a small number of training examples .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fine-tuning BERT", "sec_num": "5.4" }, { "text": "With large models, even fine-tuning becomes expensive, but Houlsby et al. (2019) show that it can be successfully approximated with adapter modules. They achieve competitive performance on 26 classification tasks at a fraction of the computational cost. Adapters in BERT were also used for multitask learning (Stickland and Murray, 2019) and cross-lingual transfer (Artetxe et al., 2019) . An alternative to fine-tuning is extracting features from frozen representations, but finetuning works better for BERT (Peters et al., 2019b) .", "cite_spans": [ { "start": 59, "end": 80, "text": "Houlsby et al. (2019)", "ref_id": "BIBREF55" }, { "start": 309, "end": 337, "text": "(Stickland and Murray, 2019)", "ref_id": "BIBREF126" }, { "start": 365, "end": 387, "text": "(Artetxe et al., 2019)", "ref_id": "BIBREF4" }, { "start": 509, "end": 531, "text": "(Peters et al., 2019b)", "ref_id": "BIBREF98" } ], "ref_spans": [], "eq_spans": [], "section": "Fine-tuning BERT", "sec_num": "5.4" }, { "text": "A big methodological challenge in the current NLP is that the reported performance improvements of new models may well be within variation induced by environment factors (Crane, 2018) . BERT is not an exception. Dodge et al. (2020) report significant variation for BERT fine-tuned on GLUE tasks due to both weight initialization and training data order. They also propose early stopping on the less-promising seeds.", "cite_spans": [ { "start": 170, "end": 183, "text": "(Crane, 2018)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Fine-tuning BERT", "sec_num": "5.4" }, { "text": "Although we hope that the above observations may be useful for the practitioners, this section does not exhaust the current research on finetuning and its alternatives. For example, we do not cover such topics as Siamese architectures, policy gradient training, automated curriculum learning, and others.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fine-tuning BERT", "sec_num": "5.4" }, { "text": "6 How Big Should BERT Be?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fine-tuning BERT", "sec_num": "5.4" }, { "text": "Transformer-based models keep growing by orders of magnitude: The 110M parameters of base BERT are now dwarfed by 17B parameters of Turing-NLG (Microsoft, 2020) , which is dwarfed by 175B of GPT-3 (Brown et al., 2020) . 
This trend raises concerns about computational complexity of self-attention , environmental issues (Strubell et al., 2019; Schwartz et al., 2019) , fair comparison of architectures (A\u00dfenmacher and Heumann, 2020) , and reproducibility.", "cite_spans": [ { "start": 143, "end": 160, "text": "(Microsoft, 2020)", "ref_id": "BIBREF92" }, { "start": 197, "end": 217, "text": "(Brown et al., 2020)", "ref_id": null }, { "start": 319, "end": 342, "text": "(Strubell et al., 2019;", "ref_id": "BIBREF127" }, { "start": 343, "end": 365, "text": "Schwartz et al., 2019)", "ref_id": "BIBREF120" }, { "start": 401, "end": 431, "text": "(A\u00dfenmacher and Heumann, 2020)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Overparameterization", "sec_num": "6.1" }, { "text": "Human language is incredibly complex, and would perhaps take many more parameters to describe fully, but the current models do not make good use of the parameters they already have. Voita et al. (2019b) showed that all but a few Transformer heads could be pruned without significant losses in performance. Clark et al. (2019) observe that most heads in the same layer show similar self-attention patterns (perhaps related to the fact that the output of all self-attention heads in a layer is passed through the same MLP), which explains why Michel et al. (2019) were able to reduce most layers to a single head. The compression studies compared in Table 1 are:
Compression method | Compression | Performance | Speedup | Resulting model | Evaluation
DistilBERT (Sanh et al., 2019) | \u00d71.5 | 90% \u00a7 | \u00d71.6 | BERT 6 | All GLUE tasks, SQuAD
BERT 6 -PKD (Sun et al., 2019a) | \u00d71.6 | 98% | \u00d71.9 | BERT 6 | No WNLI, CoLA, STS-B; RACE
BERT 3 -PKD (Sun et al., 2019a) | \u00d72.4 | 92% | \u00d73.7 | BERT 3 | No WNLI, CoLA, STS-B; RACE
Aguilar et al. (2019), Exp. 3 | \u00d71.6 | 93% | \u2212 | BERT 6 | CoLA, MRPC, QQP, RTE
BERT-48 | \u00d762 | 87% | \u00d777 | BERT 12 * \u2020 | MNLI, MRPC, SST-2
BERT-192 (Zhao et al., 2019) | \u00d75.7 | 93% | \u00d722 | BERT 12 * \u2020 | MNLI, MRPC, SST-2
TinyBERT (Jiao et al., 2019) | \u00d77.5 | 96% | \u00d79.4 | BERT 4 \u2020 | No WNLI; SQuAD
MobileBERT (Sun et al., 2020) | \u00d74.3 | 100% | \u00d74 | BERT 24 \u2020 | No WNLI; SQuAD
PD (Turc et al., 2019) | \u00d71.6 | 98% | \u00d72.5 \u2021 | BERT 6 \u2020 | No WNLI, CoLA and STS-B
WaLDORf (Tian et al., 2019) | \u00d74.4 | 93% | \u00d79 | BERT 8 \u2020 | SQuAD
MiniLM (Wang et al., 2020b) | \u00d71.65 | 99% | \u00d72 | BERT 6 | No WNLI, STS-B, MNLI mm ; SQuAD
MiniBERT (Tsai et al., 2019) | \u00d76 * * | 98% | \u00d727 * * | mBERT 3 \u2020 | CoNLL-18 POS and morphology
BiLSTM-soft (Tang et al.,
(Lan et al., 2020) | \u00d70.47 | 107% | \u2212 | BERT 12 \u2020 | MNLI, SST-2
BERT-of-Theseus | \u00d71.6 | 98% | \u00d71.9 | BERT 6 | No WNLI
PoWER-BERT (Goyal et al., 2020) | N/A | 99% | \u00d72-4.5 | BERT 12 | No WNLI; RACE", "cite_spans": [ { "start": 182, "end": 202, "text": "Voita et al. (2019b)", "ref_id": "BIBREF151" }, { "start": 282, "end": 301, "text": "(Sanh et al., 2019)", "ref_id": "BIBREF116" }, { "start": 359, "end": 378, "text": "(Sun et al., 2019a)", "ref_id": "BIBREF130" }, { "start": 439, "end": 458, "text": "(Sun et al., 2019a)", "ref_id": "BIBREF130" }, { "start": 502, "end": 536, "text": "RACE Aguilar et al. (2019), Exp. 
3", "ref_id": null }, { "start": 626, "end": 654, "text": "BERT-192 (Zhao et al., 2019)", "ref_id": null }, { "start": 707, "end": 726, "text": "(Jiao et al., 2019)", "ref_id": "BIBREF61" }, { "start": 776, "end": 794, "text": "(Sun et al., 2020)", "ref_id": "BIBREF133" }, { "start": 836, "end": 855, "text": "(Turc et al., 2019)", "ref_id": "BIBREF144" }, { "start": 913, "end": 932, "text": "(Tian et al., 2019)", "ref_id": "BIBREF141" }, { "start": 967, "end": 987, "text": "(Wang et al., 2020b)", "ref_id": "BIBREF158" }, { "start": 1049, "end": 1068, "text": "(Tsai et al., 2019)", "ref_id": "BIBREF143" }, { "start": 1138, "end": 1151, "text": "(Tang et al.,", "ref_id": null }, { "start": 1152, "end": 1170, "text": "(Lan et al., 2020)", "ref_id": null }, { "start": 1262, "end": 1282, "text": "(Goyal et al., 2020)", "ref_id": null }, { "start": 1320, "end": 1339, "text": "Clark et al. (2019)", "ref_id": "BIBREF21" }, { "start": 1555, "end": 1575, "text": "Michel et al. (2019)", "ref_id": "BIBREF90" } ], "ref_spans": [], "eq_spans": [], "section": "Overparameterization", "sec_num": "6.1" }, { "text": "Depending on the task, some BERT heads/ layers are not only redundant , but also harmful to the downstream task performance. Positive effect from head disabling was reported for machine translation (Michel et al., 2019) , abstractive summarization (Baan et al., 2019) , and GLUE tasks (Kovaleva et al., 2019) . Additionally, Tenney et al. (2019a) examine the cumulative gains of their structural probing classifier, observing that in 5 out of 8 probing tasks some layers cause a drop in scores (typically in the final layers). Gordon et al. (2020) find that 30%-40% of the weights can be pruned without impact on downstream tasks.", "cite_spans": [ { "start": 198, "end": 219, "text": "(Michel et al., 2019)", "ref_id": "BIBREF90" }, { "start": 248, "end": 267, "text": "(Baan et al., 2019)", "ref_id": "BIBREF6" }, { "start": 285, "end": 308, "text": "(Kovaleva et al., 2019)", "ref_id": "BIBREF69" }, { "start": 325, "end": 346, "text": "Tenney et al. (2019a)", "ref_id": "BIBREF139" }, { "start": 527, "end": 547, "text": "Gordon et al. (2020)", "ref_id": "BIBREF47" } ], "ref_spans": [], "eq_spans": [], "section": "Overparameterization", "sec_num": "6.1" }, { "text": "In general, larger BERT models perform better Roberts et al., 2020) , but not always: BERT-base outperformed BERT-large on subject-verb agreement (Goldberg, 2019) and sentence subject detection . Given the complexity of language, and amounts of pretraining data, it is not clear why BERT ends up with redundant heads and layers. Clark et al. (2019) suggest that one possible reason is the use of attention dropouts, which causes some attention weights to be zeroed-out during training.", "cite_spans": [ { "start": 46, "end": 67, "text": "Roberts et al., 2020)", "ref_id": null }, { "start": 146, "end": 162, "text": "(Goldberg, 2019)", "ref_id": "BIBREF45" }, { "start": 329, "end": 348, "text": "Clark et al. (2019)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Overparameterization", "sec_num": "6.1" }, { "text": "Given the above evidence of overparameterization, it does not come as a surprise that BERT can be efficiently compressed with minimal accuracy loss, which would be highly desirable for real-world applications. Such efforts to date are summarized in Table 1 . 
The main approaches are knowledge distillation, quantization, and pruning.", "cite_spans": [], "ref_spans": [ { "start": 249, "end": 256, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Compression Techniques", "sec_num": "6.2" }, { "text": "The studies in the knowledge distillation framework (Hinton et al., 2014 ) use a smaller student-network trained to mimic the behavior of a larger teacher-network. For BERT, this has been achieved through experiments with loss functions (Sanh et al., 2019; Jiao et al., 2019) , mimicking the activation patterns of individual portions of the teacher network (Sun et al., 2019a) , and knowledge transfer at the pre-training (Turc et al., 2019; Jiao et al., 2019; Sun et al., 2020) or fine-tuning stage (Jiao et al., 2019) . McCarley et al. (2020) suggest that distillation has so far worked better for GLUE than for reading comprehension, and report good results for QA from a combination of structured pruning and task-specific distillation.", "cite_spans": [ { "start": 52, "end": 72, "text": "(Hinton et al., 2014", "ref_id": "BIBREF53" }, { "start": 237, "end": 256, "text": "(Sanh et al., 2019;", "ref_id": "BIBREF116" }, { "start": 257, "end": 275, "text": "Jiao et al., 2019)", "ref_id": "BIBREF61" }, { "start": 358, "end": 377, "text": "(Sun et al., 2019a)", "ref_id": "BIBREF130" }, { "start": 423, "end": 442, "text": "(Turc et al., 2019;", "ref_id": "BIBREF144" }, { "start": 443, "end": 461, "text": "Jiao et al., 2019;", "ref_id": "BIBREF61" }, { "start": 462, "end": 479, "text": "Sun et al., 2020)", "ref_id": "BIBREF133" }, { "start": 500, "end": 519, "text": "(Jiao et al., 2019)", "ref_id": "BIBREF61" }, { "start": 522, "end": 544, "text": "McCarley et al. (2020)", "ref_id": "BIBREF86" } ], "ref_spans": [], "eq_spans": [], "section": "Compression Techniques", "sec_num": "6.2" }, { "text": "Quantization decreases BERT's memory footprint through lowering the precision of its weights (Shen et al., 2019; Zafrir et al., 2019) . Note that this strategy often requires compatible hardware.", "cite_spans": [ { "start": 93, "end": 112, "text": "(Shen et al., 2019;", "ref_id": "BIBREF134" }, { "start": 113, "end": 133, "text": "Zafrir et al., 2019)", "ref_id": "BIBREF180" } ], "ref_spans": [], "eq_spans": [], "section": "Compression Techniques", "sec_num": "6.2" }, { "text": "As discussed in section 6, individual self-attention heads and BERT layers can be disabled without significant drop in performance (Michel et al., 2019; Kovaleva et al., 2019; Baan et al., 2019) . Pruning is a compression technique that takes advantage of that fact, typically reducing the amount of computation via zeroing out certain parts of the large model. In structured pruning, architecture blocks are dropped, as in LayerDrop . In unstructured pruning, the weights in the entire model are pruned irrespective of their location, as in magnitude pruning (Chen et al., 2020) or movement pruning (Sanh et al., 2020) . Prasanna et al. (2020) and Chen et al. (2020) explore BERT from the perspective of the lottery ticket hypothesis (Frankle and Carbin, 2019) , looking specifically at the ''winning'' subnetworks in pre-trained BERT. 
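As an illustration of the unstructured magnitude pruning mentioned above, the following minimal sketch zeroes out the smallest-magnitude weights of BERT's linear layers with PyTorch's pruning utilities; the 30% sparsity level is an arbitrary placeholder rather than a value from the cited studies:

```python
# Minimal sketch: unstructured magnitude pruning of BERT's linear layers.
# The 30% sparsity level is an arbitrary placeholder. Assumes PyTorch and the
# HuggingFace `transformers` library.
import torch
import torch.nn.utils.prune as prune
from transformers import BertModel

model = BertModel.from_pretrained("bert-base-uncased")

for module in model.modules():
    if isinstance(module, torch.nn.Linear):
        # Zero out the 30% of weights with the smallest absolute value.
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # make the zeroed weights permanent

total = sum(p.numel() for p in model.parameters())
zeros = sum((p == 0).sum().item() for p in model.parameters())
print(f"fraction of zeroed parameters: {zeros / total:.1%}")
```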
They independently find that such subnetworks do exist, and that transferability between subnetworks for different tasks varies.", "cite_spans": [ { "start": 130, "end": 151, "text": "(Michel et al., 2019;", "ref_id": "BIBREF90" }, { "start": 152, "end": 174, "text": "Kovaleva et al., 2019;", "ref_id": "BIBREF69" }, { "start": 175, "end": 193, "text": "Baan et al., 2019)", "ref_id": "BIBREF6" }, { "start": 554, "end": 573, "text": "(Chen et al., 2020)", "ref_id": "BIBREF40" }, { "start": 594, "end": 613, "text": "(Sanh et al., 2020)", "ref_id": "BIBREF117" }, { "start": 616, "end": 638, "text": "Prasanna et al. (2020)", "ref_id": "BIBREF103" }, { "start": 643, "end": 661, "text": "Chen et al. (2020)", "ref_id": "BIBREF40" }, { "start": 729, "end": 755, "text": "(Frankle and Carbin, 2019)", "ref_id": "BIBREF38" } ], "ref_spans": [], "eq_spans": [], "section": "Compression Techniques", "sec_num": "6.2" }, { "text": "If the ultimate goal of training BERT is compression, it is recommended to train larger models and compress them heavily, rather than to compress smaller models lightly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Compression Techniques", "sec_num": "6.2" }, { "text": "Other techniques include decomposing BERT's embedding matrix into smaller matrices (Lan et al., 2020) , progressive module replacing , and dynamic elimination of intermediate encoder outputs (Goyal et al., 2020) . See Ganesh et al. (2020) for a more detailed discussion of compression methods.", "cite_spans": [ { "start": 83, "end": 101, "text": "(Lan et al., 2020)", "ref_id": null }, { "start": 191, "end": 211, "text": "(Goyal et al., 2020)", "ref_id": null }, { "start": 218, "end": 238, "text": "Ganesh et al. (2020)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Compression Techniques", "sec_num": "6.2" }, { "text": "There is a nascent discussion around pruning as a model analysis technique. The basic idea is that a compressed model a priori consists of elements that are useful for prediction; therefore, by finding out what they do, we may find out what the whole network does. For instance, BERT has heads that seem to encode frame-semantic relations, but disabling them might not hurt downstream task performance (Kovaleva et al., 2019) ; this suggests that this knowledge is not actually used.", "cite_spans": [ { "start": 400, "end": 423, "text": "(Kovaleva et al., 2019)", "ref_id": "BIBREF69" } ], "ref_spans": [], "eq_spans": [], "section": "Pruning and Model Analysis", "sec_num": "6.3" }, { "text": "For the base Transformer, Voita et al. (2019b) identify the functions of self-attention heads and then check which of them survive the pruning, finding that the syntactic and positional heads are the last ones to go. For BERT, Prasanna et al. (2020) go in the opposite direction: pruning on the basis of importance scores, and interpreting the remaining ''good'' subnetwork. With respect to self-attention heads specifically, it does not seem to be the case that only the heads that potentially encode non-trivial linguistic patterns survive the pruning.", "cite_spans": [ { "start": 227, "end": 249, "text": "Prasanna et al. (2020)", "ref_id": "BIBREF103" } ], "ref_spans": [], "eq_spans": [], "section": "Pruning and Model Analysis", "sec_num": "6.3" }, { "text": "The models and methodology in these studies differ, so the evidence is inconclusive. In particular, Voita et al. (2019b) find that before pruning the majority of heads are syntactic, and Prasanna et al. 
(2020) find that the majority of heads do not have potentially non-trivial attention patterns.", "cite_spans": [ { "start": 100, "end": 120, "text": "Voita et al. (2019b)", "ref_id": "BIBREF151" }, { "start": 187, "end": 209, "text": "Prasanna et al. (2020)", "ref_id": "BIBREF103" } ], "ref_spans": [], "eq_spans": [], "section": "Pruning and Model Analysis", "sec_num": "6.3" }, { "text": "An important limitation of the current head and layer ablation studies (Michel et al., 2019; Kovaleva et al., 2019) is that they inherently assume that certain knowledge is contained in heads/layers. However, there is evidence of more diffuse representations spread across the full network, such as the gradual increase in accuracy on difficult semantic parsing tasks (Tenney et al., 2019a) or the absence of heads that would perform parsing ''in general'' (Clark et al., 2019; Htut et al., 2019) . If so, ablating individual components harms the weight-sharing mechanism. Conclusions from component ablations are also problematic if the same information is duplicated elsewhere in the network.", "cite_spans": [ { "start": 71, "end": 92, "text": "(Michel et al., 2019;", "ref_id": "BIBREF90" }, { "start": 93, "end": 115, "text": "Kovaleva et al., 2019)", "ref_id": "BIBREF69" }, { "start": 368, "end": 390, "text": "(Tenney et al., 2019a)", "ref_id": "BIBREF139" }, { "start": 457, "end": 477, "text": "(Clark et al., 2019;", "ref_id": "BIBREF21" }, { "start": 478, "end": 496, "text": "Htut et al., 2019)", "ref_id": "BIBREF56" } ], "ref_spans": [], "eq_spans": [], "section": "Pruning and Model Analysis", "sec_num": "6.3" }, { "text": "BERTology has clearly come a long way, but it is fair to say we still have more questions than answers about how BERT works. In this section, we list what we believe to be the most promising directions for further research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Directions for Further Research", "sec_num": "7" }, { "text": "Benchmarks that require verbal reasoning. Although BERT enabled breakthroughs on many NLP benchmarks, a growing list of analysis papers are showing that its language skills are not as impressive as they seem. In particular, they were shown to rely on shallow heuristics in natural language inference Zellers et al., 2019; Jin et al., 2020) , reading comprehension Sugawara et al., 2020; , argument reasoning comprehension (Niven and Kao, 2019) , and text classification (Jin et al., 2020) . Such heuristics can even be used to reconstruct a non-publicly available model (Krishna et al., 2020) . As with any optimization method, if there is a shortcut in the data, we have no reason to expect BERT to not learn it. But harder datasets that cannot be resolved with shallow heuristics are unlikely to emerge if their development is not as valued as modeling work.", "cite_spans": [ { "start": 300, "end": 321, "text": "Zellers et al., 2019;", "ref_id": "BIBREF181" }, { "start": 322, "end": 339, "text": "Jin et al., 2020)", "ref_id": "BIBREF62" }, { "start": 364, "end": 386, "text": "Sugawara et al., 2020;", "ref_id": "BIBREF129" }, { "start": 422, "end": 443, "text": "(Niven and Kao, 2019)", "ref_id": "BIBREF95" }, { "start": 470, "end": 488, "text": "(Jin et al., 2020)", "ref_id": "BIBREF62" }, { "start": 570, "end": 592, "text": "(Krishna et al., 2020)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Directions for Further Research", "sec_num": "7" }, { "text": "Benchmarks for the full range of linguistic competence. 
Although the language models seem to acquire a great deal of knowledge about language, we do not currently have comprehensive stress tests for different aspects of linguistic knowledge. A step in this direction is the ''Checklist'' behavioral testing (Ribeiro et al., 2020) , the best paper at ACL 2020. Ideally, such tests would measure not only errors, but also sensitivity (Ettinger, 2019) .", "cite_spans": [ { "start": 307, "end": 329, "text": "(Ribeiro et al., 2020)", "ref_id": "BIBREF109" }, { "start": 432, "end": 448, "text": "(Ettinger, 2019)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Directions for Further Research", "sec_num": "7" }, { "text": "Developing methods to ''teach'' reasoning. While large pre-trained models have a lot of knowledge, they often fail if any reasoning needs to be performed on top of the facts they possess (Talmor et al., 2019 , see also subsection 3.3). For instance, Richardson et al. (2020) propose a method to ''teach'' BERT quantification, conditionals, comparatives, and Boolean coordination.", "cite_spans": [ { "start": 187, "end": 207, "text": "(Talmor et al., 2019", "ref_id": "BIBREF136" }, { "start": 250, "end": 274, "text": "Richardson et al. (2020)", "ref_id": "BIBREF111" } ], "ref_spans": [], "eq_spans": [], "section": "Directions for Further Research", "sec_num": "7" }, { "text": "Learning what happens at inference time. Most BERT analysis papers focus on different probes of the model, with the goal to find what the language model ''knows''. However, probing studies have limitations (subsection 3.4), and to this point, far fewer papers have focused on discovering what knowledge actually gets used. Several promising directions are the ''amnesic probing'' (Elazar et al., 2020) , identifying features important for prediction for a given task (Arkhangelskaia and Dutta, 2019) , and pruning the model to remove the non-important components (Voita et al., 2019b; Michel et al., 2019; Prasanna et al., 2020) .", "cite_spans": [ { "start": 380, "end": 401, "text": "(Elazar et al., 2020)", "ref_id": "BIBREF33" }, { "start": 467, "end": 499, "text": "(Arkhangelskaia and Dutta, 2019)", "ref_id": "BIBREF3" }, { "start": 563, "end": 584, "text": "(Voita et al., 2019b;", "ref_id": "BIBREF151" }, { "start": 585, "end": 605, "text": "Michel et al., 2019;", "ref_id": "BIBREF90" }, { "start": 606, "end": 628, "text": "Prasanna et al., 2020)", "ref_id": "BIBREF103" } ], "ref_spans": [], "eq_spans": [], "section": "Directions for Further Research", "sec_num": "7" }, { "text": "In a little over a year, BERT has become a ubiquitous baseline in NLP experiments and inspired numerous studies analyzing the model and proposing various improvements. The stream of papers seems to be accelerating rather than slowing down, and we hope that this survey helps the community to focus on the biggest unresolved questions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "See also the recent findings on adversarial triggers, which get the model to produce a certain output even though they", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Voita et al. 
(2019a) look at the evolution of token embeddings, showing that in the earlier Transformer layers, MLM forces the acquisition of contextual information at the expense of the token identity, which gets recreated in later layers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "These BERT results are also compatible with findings byVig and Belinkov (2019), who report the highest attention to tokens in dependency relations in the middle layers of GPT-2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Kondratyuk and Straka (2019) suggest that fine-tuning on Universal Dependencies does result in syntactically meaningful attention patterns, but there was no quantitative evaluation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank the anonymous reviewers for their valuable feedback. This work is funded in part by NSF award number IIS-1844740 to Anna Rumshisky.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": "9" } ], "bib_entries": { "BIBREF1": { "ref_id": "b1", "title": "Pooled Contextualized Embeddings for Named Entity Recognition", "authors": [ { "first": "Alan", "middle": [], "last": "Akbik", "suffix": "" }, { "first": "Tanja", "middle": [], "last": "Bergmann", "suffix": "" }, { "first": "Roland", "middle": [], "last": "Vollgraf", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "724--728", "other_ids": { "DOI": [ "10.18653/v1/N19-1078" ] }, "num": null, "urls": [], "raw_text": "Alan Akbik, Tanja Bergmann, and Roland Vollgraf. 2019. Pooled Contextualized Embed- dings for Named Entity Recognition. In Pro- ceedings of the 2019 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 724-728, Minneapolis, Minnesota. Asso- ciation for Computational Linguistics. DOI: https://doi.org/10.18653/v1/N19 -1078", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Association for Computational Linguistics", "authors": [ { "first": "Yuki", "middle": [], "last": "Arase", "suffix": "" }, { "first": "Jun'ichi", "middle": [], "last": "Tsujii", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "5393--5404", "other_ids": { "DOI": [ "10.18653/v1/D19-1542" ] }, "num": null, "urls": [], "raw_text": "Yuki Arase and Jun'ichi Tsujii. 2019. Transfer Fine-Tuning: A BERT Case Study. In Pro- ceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Nat- ural Language Processing (EMNLP-IJCNLP), pages 5393-5404, Hong Kong, China. Asso- ciation for Computational Linguistics. DOI: https://doi.org/10.18653/v1/D19 -1542", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Whatcha lookin'at? 
DeepLIFTing BERT's Attention in Question Answering", "authors": [ { "first": "Ekaterina", "middle": [], "last": "Arkhangelskaia", "suffix": "" }, { "first": "Sourav", "middle": [], "last": "Dutta", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1910.06431" ] }, "num": null, "urls": [], "raw_text": "Ekaterina Arkhangelskaia and Sourav Dutta. 2019. Whatcha lookin'at? DeepLIFTing BERT's Attention in Question Answering. arXiv preprint arXiv:1910.06431.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "On the Cross-lingual Transferability of Monolingual Representations", "authors": [ { "first": "Mikel", "middle": [], "last": "Artetxe", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" }, { "first": "Dani", "middle": [], "last": "Yogatama", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.421" ], "arXiv": [ "arXiv:1911.03310" ] }, "num": null, "urls": [], "raw_text": "Mikel Artetxe, Sebastian Ruder, and Dani Yogatama. 2019. On the Cross-lingual Trans- ferability of Monolingual Representations. arXiv:1911.03310 [cs]. DOI: https://doi .org/10.18653/v1/2020.acl-main.421", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "On the comparability of Pre-Trained Language Models", "authors": [ { "first": "Matthias", "middle": [], "last": "A\u00dfenmacher", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Heumann", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2001.00781" ] }, "num": null, "urls": [], "raw_text": "Matthias A\u00dfenmacher and Christian Heumann. 2020. On the comparability of Pre-Trained Language Models. arXiv:2001.00781 [cs, stat].", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Understanding Multi-Head Attention in Abstractive Summarization", "authors": [ { "first": "Joris", "middle": [], "last": "Baan", "suffix": "" }, { "first": "Marlies", "middle": [], "last": "Maartje Ter Hoeve", "suffix": "" }, { "first": "Anne", "middle": [], "last": "Van Der Wees", "suffix": "" }, { "first": "Maarten", "middle": [], "last": "Schuth", "suffix": "" }, { "first": "", "middle": [], "last": "De Rijke", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1911.03898" ] }, "num": null, "urls": [], "raw_text": "Joris Baan, Maartje ter Hoeve, Marlies van der Wees, Anne Schuth, and Maarten de Rijke. 2019. Understanding Multi-Head Attention in Abstractive Summarization. 
arXiv preprint arXiv:1911.03898.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Cloze-driven Pretraining of Self-Attention Networks", "authors": [ { "first": "Alexei", "middle": [], "last": "Baevski", "suffix": "" }, { "first": "Sergey", "middle": [], "last": "Edunov", "suffix": "" }, { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "5360--5369", "other_ids": { "DOI": [ "10.18653/v1/D19-1539" ] }, "num": null, "urls": [], "raw_text": "Alexei Baevski, Sergey Edunov, Yinhan Liu, Luke Zettlemoyer, and Michael Auli. 2019. Cloze-driven Pretraining of Self-Attention Networks. In Proceedings of the 2019 Confer- ence on Empirical Methods in Natural Lan- guage Processing and the 9th International Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 5360-5369, Hong Kong, China. Association for Compu- tational Linguistics. DOI: https://doi.org /10.18653/v1/D19-1539", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Sega BERT: Pre-training of Segment-aware BERT for Language Understanding", "authors": [ { "first": "He", "middle": [], "last": "Bai", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Luchen", "middle": [], "last": "Tan", "suffix": "" }, { "first": "Kun", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Wen", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Li", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2004.14996" ] }, "num": null, "urls": [], "raw_text": "He Bai, Peng Shi, Jimmy Lin, Luchen Tan, Kun Xiong, Wen Gao, and Ming Li. 2020. Sega BERT: Pre-training of Segment-aware BERT for Language Understanding. arXiv:2004. 14996 [cs].", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "What's in a Name? Are BERT Named Entity Representations just as Good for any other Name?", "authors": [ { "first": "Sriram", "middle": [], "last": "Balasubramanian", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Jain", "suffix": "" }, { "first": "Gaurav", "middle": [], "last": "Jindal", "suffix": "" }, { "first": "Abhijeet", "middle": [], "last": "Awasthi", "suffix": "" }, { "first": "Sunita", "middle": [], "last": "Sarawagi", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 5th Workshop on Representation Learning for NLP", "volume": "", "issue": "", "pages": "205--214", "other_ids": { "DOI": [ "10.18653/v1/2020.repl4nlp-1.24" ] }, "num": null, "urls": [], "raw_text": "Sriram Balasubramanian, Naman Jain, Gaurav Jindal, Abhijeet Awasthi, and Sunita Sarawagi. 2020. What's in a Name? Are BERT Named Entity Representations just as Good for any other Name? In Proceedings of the 5th Workshop on Representation Learning for NLP, pages 205-214, Online. Associ- ation for Computational Linguistics. 
DOI: https://doi.org/10.18653/v1/2020 .repl4nlp-1.24", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "UniLMv2: Pseudo-Masked Language Models for Unified Language Model Pre-Training", "authors": [ { "first": "Hangbo", "middle": [], "last": "Bao", "suffix": "" }, { "first": "Li", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Furu", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Wenhui", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Nan", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Songhao", "middle": [], "last": "Piao", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Hsiao-Wuen", "middle": [], "last": "Hon", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2002.12804" ] }, "num": null, "urls": [], "raw_text": "Hangbo Bao, Li Dong, Furu Wei, Wenhui Wang, Nan Yang, Xiaodong Liu, Yu Wang, Songhao Piao, Jianfeng Gao, Ming Zhou, and Hsiao- Wuen Hon. 2020. UniLMv2: Pseudo-Masked Language Models for Unified Language Model Pre-Training. arXiv:2002.12804 [cs].", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Analysis Methods in Neural Language Processing: A Survey. Transactions of the Association for Computational Linguistics", "authors": [ { "first": "Yonatan", "middle": [], "last": "Belinkov", "suffix": "" }, { "first": "James", "middle": [], "last": "Glass", "suffix": "" } ], "year": 2019, "venue": "", "volume": "7", "issue": "", "pages": "49--72", "other_ids": { "DOI": [ "10.1162/tacl_a_00254" ] }, "num": null, "urls": [], "raw_text": "Yonatan Belinkov and James Glass. 2019. Anal- ysis Methods in Neural Language Processing: A Survey. Transactions of the Association for Computational Linguistics, 7:49-72. DOI: https://doi.org/10.1162/tacl a 00254", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "PERL: Pivot-based Domain Adaptation for Pre-trained Deep Contextualized Embedding Models", "authors": [ { "first": "Eyal", "middle": [], "last": "Ben-David", "suffix": "" }, { "first": "Carmel", "middle": [], "last": "Rabinovitz", "suffix": "" }, { "first": "Roi", "middle": [], "last": "Reichart", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1162/tacl_a_00328" ], "arXiv": [ "arXiv:2006.09075[cs].DOI" ] }, "num": null, "urls": [], "raw_text": "Eyal Ben-David, Carmel Rabinovitz, and Roi Reichart. 2020. PERL: Pivot-based Domain Adaptation for Pre-trained Deep Contextual- ized Embedding Models. arXiv:2006.09075 [cs]. DOI: https://doi.org/10.1162 /tacl a 00328", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Interpreting Pretrained Contextualized Representations via Reductions to Static Embeddings", "authors": [ { "first": "Rishi", "middle": [], "last": "Bommasani", "suffix": "" }, { "first": "Kelly", "middle": [], "last": "Davis", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4758--4781", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.431" ] }, "num": null, "urls": [], "raw_text": "Rishi Bommasani, Kelly Davis, and Claire Cardie. 2020. 
Interpreting Pretrained Contex- tualized Representations via Reductions to Static Embeddings. In Proceedings of the 58th Annual Meeting of the Association for Compu- tational Linguistics, pages 4758-4781. DOI: https://doi.org/10.18653/v1/2020 .acl-main.431", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Inducing Relational Knowledge from BERT", "authors": [ { "first": "Zied", "middle": [], "last": "Bouraoui", "suffix": "" }, { "first": "Jose", "middle": [], "last": "Camacho-Collados", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Schockaert", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1609/aaai.v34i05.6242" ], "arXiv": [ "arXiv:1911.12753" ] }, "num": null, "urls": [], "raw_text": "Zied Bouraoui, Jose Camacho-Collados, and Steven Schockaert. 2019. Inducing Relational Knowledge from BERT. arXiv:1911.12753 [cs]. DOI: https://doi.org/10.1609 /aaai.v34i05.6242", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Investigating Entity Knowledge in BERT with Simple Neural End-To-End Entity Linking", "authors": [ { "first": "Samuel", "middle": [], "last": "Broscheit", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)", "volume": "", "issue": "", "pages": "677--685", "other_ids": { "DOI": [ "10.18653/v1/K19-1063" ] }, "num": null, "urls": [], "raw_text": "Samuel Broscheit. 2019. Investigating Entity Knowledge in BERT with Simple Neural End-To-End Entity Linking. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 677-685, Hong Kong, China. Asso- ciation for Computational Linguistics. DOI: https://doi.org/10.18653/v1/K19 -1063", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Language Models are Few-Shot Learners", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2005.14165" ] }, "num": null, "urls": [], "raw_text": "Language Models are Few-Shot Learners. arXiv:2005.14165 [cs].", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "On Identifiability in Transformers", "authors": [ { "first": "Gino", "middle": [], "last": "Brunner", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Damian", "middle": [], "last": "Pascual", "suffix": "" }, { "first": "Oliver", "middle": [], "last": "Richter", "suffix": "" }, { "first": "Massimiliano", "middle": [], "last": "Ciaramita", "suffix": "" }, { "first": "Roger", "middle": [], "last": "Wattenhofer", "suffix": "" } ], "year": 2020, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gino Brunner, Yang Liu, Damian Pascual, Oliver Richter, Massimiliano Ciaramita, and Roger Wattenhofer. 2020. On Identifiability in Transformers. 
In International Conference on Learning Representations.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "The Lottery Ticket Hypothesis for Pre-trained BERT Networks", "authors": [ { "first": "Tianlong", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Frankle", "suffix": "" }, { "first": "Shiyu", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Sijia", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Zhangyang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Carbin", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2007.12223" ] }, "num": null, "urls": [], "raw_text": "Tianlong Chen, Jonathan Frankle, Shiyu Chang, Sijia Liu, Yang Zhang, Zhangyang Wang, and Michael Carbin. 2020. The Lottery Ticket Hypothesis for Pre-trained BERT Networks. arXiv:2007.12223 [cs, stat].", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Symmetric Regularization based BERT for Pair-Wise Semantic Reasoning", "authors": [ { "first": "Xingyi", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Weidi", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Kunlong", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Bin", "middle": [], "last": "Bi", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Yan", "suffix": "" }, { "first": "Chen", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Luo", "middle": [], "last": "Si", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Chu", "suffix": "" }, { "first": "Taifeng", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1909.03405" ] }, "num": null, "urls": [], "raw_text": "Xingyi Cheng, Weidi Xu, Kunlong Chen, Wei Wang, Bin Bi, Ming Yan, Chen Wu, Luo Si, Wei Chu, and Taifeng Wang. 2019. Symmetric Regularization based BERT for Pair-Wise Semantic Reasoning. arXiv:1909.03405 [cs].", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "What Does BERT Look at? An Analysis of BERT's Attention", "authors": [ { "first": "Kevin", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Urvashi", "middle": [], "last": "Khandelwal", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", "volume": "", "issue": "", "pages": "276--286", "other_ids": { "DOI": [ "10.18653/v1/W19-4828" ], "PMID": [ "31709923" ] }, "num": null, "urls": [], "raw_text": "Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. 2019. What Does BERT Look at? An Analysis of BERT's Attention. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 276-286, Florence, Italy. Association for Computational Linguistics. 
DOI: https:// doi.org/10.18653/v1/W19-4828, PMID: 31709923", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "ELECTRA: Pre-Training Text Encoders as Discriminators Rather Than Generators", "authors": [ { "first": "Kevin", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Minh-Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Quoc", "middle": [ "V" ], "last": "Le", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2020, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: Pre-Training Text Encoders as Discriminators Rather Than Generators. In International Conference on Learning Representations.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "On the use of BERT for Neural Machine Translation", "authors": [ { "first": "Stephane", "middle": [], "last": "Clinchant", "suffix": "" }, { "first": "Kweon Woo", "middle": [], "last": "Jung", "suffix": "" }, { "first": "Vassilina", "middle": [], "last": "Nikoulina", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 3rd Workshop on Neural Generation and Translation", "volume": "", "issue": "", "pages": "108--117", "other_ids": { "DOI": [ "10.18653/v1/D19-5611" ] }, "num": null, "urls": [], "raw_text": "Stephane Clinchant, Kweon Woo Jung, and Vassilina Nikoulina. 2019. On the use of BERT for Neural Machine Translation. In Proceedings of the 3rd Workshop on Neural Generation and Translation, pages 108-117, Hong Kong. Asso- ciation for Computational Linguistics. DOI: https://doi.org/10.18653/v1/D19 -5611", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Unsupervised Cross-Lingual Representation Learning at Scale", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Kartikay", "middle": [], "last": "Khandelwal", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Vishrav", "middle": [], "last": "Chaudhary", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Wenzek", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Guzm\u00e1n", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.747" ], "arXiv": [ "arXiv:1911.02116[cs].DOI" ] }, "num": null, "urls": [], "raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Unsupervised Cross-Lingual Represen- tation Learning at Scale. arXiv:1911.02116 [cs]. 
DOI: https://doi.org/10.18653 /v1/2020.acl-main.747", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Adaptively Sparse Transformers", "authors": [ { "first": "M", "middle": [], "last": "Gon\u00e7alo", "suffix": "" }, { "first": "Vlad", "middle": [], "last": "Correia", "suffix": "" }, { "first": "Andr\u00e9", "middle": [ "F T" ], "last": "Niculae", "suffix": "" }, { "first": "", "middle": [], "last": "Martins", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "2174--2184", "other_ids": { "DOI": [ "10.18653/v1/D19-1223" ] }, "num": null, "urls": [], "raw_text": "Gon\u00e7alo M. Correia, Vlad Niculae, and Andr\u00e9 F. T. Martins. 2019. Adaptively Sparse Trans- formers. In Proceedings of the 2019 Conference on Empirical Methods in Natural Lan- guage Processing and the 9th International Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 2174-2184, Hong Kong, China. Association for Compu- tational Linguistics. DOI: https://doi .org/10.18653/v1/D19-1223", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Questionable Answers in Question Answering Research: Reproducibility and Variability of Published Results", "authors": [ { "first": "Matt", "middle": [], "last": "Crane", "suffix": "" } ], "year": 2018, "venue": "Transactions of the Association for Computational Linguistics", "volume": "6", "issue": "", "pages": "241--252", "other_ids": { "DOI": [ "10.1162/tacl_a_00018" ] }, "num": null, "urls": [], "raw_text": "Matt Crane. 2018. Questionable Answers in Question Answering Research: Reproducibility and Variability of Published Results. Trans- actions of the Association for Computational Linguistics, 6:241-252. DOI: https://doi org/10.1162/tacl a 00018", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Does BERT Solve Commonsense Task via Commonsense Knowledge?", "authors": [ { "first": "Leyang", "middle": [], "last": "Cui", "suffix": "" }, { "first": "Sijie", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2008.03945" ] }, "num": null, "urls": [], "raw_text": "Leyang Cui, Sijie Cheng, Yu Wu, and Yue Zhang. 2020. Does BERT Solve Common- sense Task via Commonsense Knowledge? arXiv:2008.03945 [cs].", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Pre-Training with Whole Word Masking for Chinese BERT", "authors": [ { "first": "Yiming", "middle": [], "last": "Cui", "suffix": "" }, { "first": "Wanxiang", "middle": [], "last": "Che", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Qin", "suffix": "" }, { "first": "Ziqing", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Shijin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Guoping", "middle": [], "last": "Hu", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1906.08101" ] }, "num": null, "urls": [], "raw_text": "Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, and Guoping Hu. 2019. Pre-Training with Whole Word Masking for Chinese BERT. 
arXiv:1906.08101 [cs].", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Cracking the Contextual Commonsense Code: Understanding Commonsense Reasoning Aptitude of Deep Contextual Representations", "authors": [ { "first": "Jeff", "middle": [], "last": "Da", "suffix": "" }, { "first": "Jungo", "middle": [], "last": "Kasai", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the First Workshop on Commonsense Inference in Natural Language Processing", "volume": "", "issue": "", "pages": "1--12", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeff Da and Jungo Kasai. 2019. Cracking the Contextual Commonsense Code: Understand- ing Commonsense Reasoning Aptitude of Deep Contextual Representations. In Proceed- ings of the First Workshop on Commonsense Inference in Natural Language Processing, pages 1-12, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Commonsense Knowledge Mining from Pretrained Models", "authors": [ { "first": "Joe", "middle": [], "last": "Davison", "suffix": "" }, { "first": "Joshua", "middle": [], "last": "Feldman", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Rush", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "1173--1178", "other_ids": { "DOI": [ "10.18653/v1/D19-1109" ] }, "num": null, "urls": [], "raw_text": "Joe Davison, Joshua Feldman, and Alexander Rush. 2019. Commonsense Knowledge Mining from Pretrained Models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1173-1178, Hong Kong, China. Asso- ciation for Computational Linguistics. DOI: https://doi.org/10.18653/v1/D19 -1109", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "BERT: Pretraining of Deep Bidirectional Transformers for Language Understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre- training of Deep Bidirectional Transformers for Language Understanding. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Fine-Tuning Pretrained Language Models: Weight Initializations, Data Orders, and Early Stopping", "authors": [ { "first": "Jesse", "middle": [], "last": "Dodge", "suffix": "" }, { "first": "Gabriel", "middle": [], "last": "Ilharco", "suffix": "" }, { "first": "Roy", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "Ali", "middle": [], "last": "Farhadi", "suffix": "" }, { "first": "Hannaneh", "middle": [], "last": "Hajishirzi", "suffix": "" }, { "first": "Noah", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2002.06305" ] }, "num": null, "urls": [], "raw_text": "Jesse Dodge, Gabriel Ilharco, Roy Schwartz, Ali Farhadi, Hannaneh Hajishirzi, and Noah Smith. 2020. Fine-Tuning Pretrained Language Models: Weight Initializations, Data Orders, and Early Stopping. arXiv:2002.06305 [cs].", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "When Bert Forgets How To POS: Amnesic Probing of Linguistic Properties and MLM Predictions", "authors": [ { "first": "Yanai", "middle": [], "last": "Elazar", "suffix": "" }, { "first": "Shauli", "middle": [], "last": "Ravfogel", "suffix": "" }, { "first": "Alon", "middle": [], "last": "Jacovi", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2006.00995" ] }, "num": null, "urls": [], "raw_text": "Yanai Elazar, Shauli Ravfogel, Alon Jacovi, and Yoav Goldberg. 2020. When Bert Forgets How To POS: Amnesic Probing of Linguistic Properties and MLM Predictions. arXiv:2006. 00995 [cs].", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "How Contextual are Contextualized Word Representations? Comparing the Geometry of BERT, ELMo, and GPT-2 Embeddings", "authors": [ { "first": "Kawin", "middle": [], "last": "Ethayarajh", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "55--65", "other_ids": { "DOI": [ "10.18653/v1/D19-1006" ] }, "num": null, "urls": [], "raw_text": "Kawin Ethayarajh. 2019. How Contextual are Contextualized Word Representations? Comparing the Geometry of BERT, ELMo, and GPT-2 Embeddings. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 55-65, Hong Kong, China. Associ- ation for Computational Linguistics. DOI: https://doi.org/10.18653/v1/D19 -1006", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models", "authors": [ { "first": "Allyson", "middle": [], "last": "Ettinger", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1162/tacl_a_00298" ], "arXiv": [ "arXiv:1907.13528[cs].DOI" ] }, "num": null, "urls": [], "raw_text": "Allyson Ettinger. 2019. What BERT is not: Lessons from a new suite of psycholinguis- tic diagnostics for language models. 
arXiv: 1907.13528 [cs]. DOI: https://doi.org /10.1162/tacl a 00298", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Reducing Transformer Depth on Demand with Structured Dropout", "authors": [ { "first": "Angela", "middle": [], "last": "Fan", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" } ], "year": 2019, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Angela Fan, Edouard Grave, and Armand Joulin. 2019. Reducing Transformer Depth on Demand with Structured Dropout. In International Conference on Learning Representations.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Do Neural Language Representations Learn Physical Commonsense?", "authors": [ { "first": "Maxwell", "middle": [], "last": "Forbes", "suffix": "" }, { "first": "Ari", "middle": [], "last": "Holtzman", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 41st Annual Conference of the Cognitive Science Society", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maxwell Forbes, Ari Holtzman, and Yejin Choi. 2019. Do Neural Language Representations Learn Physical Commonsense? In Proceedings of the 41st Annual Conference of the Cognitive Science Society (CogSci 2019), page 7.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks", "authors": [ { "first": "Jonathan", "middle": [], "last": "Frankle", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Carbin", "suffix": "" } ], "year": 2019, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonathan Frankle and Michael Carbin. 2019. The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks. In International Conference on Learning Representations.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Compressing large-scale transformer-based models: A case study on BERT", "authors": [ { "first": "Marianne", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Hassan", "middle": [], "last": "Winslett", "suffix": "" }, { "first": "Preslav", "middle": [], "last": "Sajjad", "suffix": "" }, { "first": "", "middle": [], "last": "Nakov", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2002.11985" ] }, "num": null, "urls": [], "raw_text": "Chen, Marianne Winslett, Hassan Sajjad, and Preslav Nakov. 2020. Compressing large-scale transformer-based models: A case study on BERT. arXiv preprint arXiv:2002.11985.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "TANDA: Transfer and Adapt Pre-Trained Transformer Models for Answer Sentence Selection", "authors": [ { "first": "Siddhant", "middle": [], "last": "Garg", "suffix": "" }, { "first": "Thuy", "middle": [], "last": "Vu", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Moschitti", "suffix": "" } ], "year": 2020, "venue": "AAAI", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1609/aaai.v34i05.6282" ] }, "num": null, "urls": [], "raw_text": "Siddhant Garg, Thuy Vu, and Alessandro Moschitti. 2020. 
TANDA: Transfer and Adapt Pre-Trained Transformer Models for Answer Sentence Selection. In AAAI. DOI: https:// doi.org/10.1609/aaai.v34i05.6282", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Span Selection Pre-training for Question Answering", "authors": [ { "first": "Michael", "middle": [], "last": "Glass", "suffix": "" }, { "first": "Alfio", "middle": [], "last": "Gliozzo", "suffix": "" }, { "first": "Rishav", "middle": [], "last": "Chakravarti", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Ferritto", "suffix": "" }, { "first": "Lin", "middle": [], "last": "Pan", "suffix": "" }, { "first": "G", "middle": [ "P" ], "last": "Shrivatsa Bhargav", "suffix": "" }, { "first": "Dinesh", "middle": [], "last": "Garg", "suffix": "" }, { "first": "Avi", "middle": [], "last": "Sil", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "2773--2782", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.247" ] }, "num": null, "urls": [], "raw_text": "Michael Glass, Alfio Gliozzo, Rishav Chakravarti, Anthony Ferritto, Lin Pan, G.P. Shrivatsa Bhargav, Dinesh Garg, and Avi Sil. 2020. Span Selection Pre-training for Question Answering. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2773-2782, Online. Asso- ciation for Computational Linguistics. DOI: https://doi.org/10.18653/v1/2020 .acl-main.247", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Is Supervised Syntactic Parsing Beneficial for Language Understanding? An Empirical Investigation", "authors": [ { "first": "Goran", "middle": [], "last": "Glava\u0161", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Vuli\u0107", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2008.06788" ] }, "num": null, "urls": [], "raw_text": "Goran Glava\u0161 and Ivan Vuli\u0107. 2020. Is Supervised Syntactic Parsing Beneficial for Language Understanding? An Empirical Investigation. arXiv:2008.06788 [cs].", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Constructions at Work: The Nature of Generalization in Language", "authors": [ { "first": "Adele", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adele Goldberg. 2006. Constructions at Work: The Nature of Generalization in Language, Oxford University Press, USA.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Assessing BERT's syntactic abilities", "authors": [ { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1901.05287" ] }, "num": null, "urls": [], "raw_text": "Yoav Goldberg. 2019. Assessing BERT's syntac- tic abilities. 
arXiv preprint arXiv:1901.05287.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Efficient training of BERT by progressively stacking", "authors": [ { "first": "Linyuan", "middle": [], "last": "Gong", "suffix": "" }, { "first": "Di", "middle": [], "last": "He", "suffix": "" }, { "first": "Zhuohan", "middle": [], "last": "Li", "suffix": "" }, { "first": "Tao", "middle": [], "last": "Qin", "suffix": "" }, { "first": "Liwei", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Tieyan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "International Conference on Machine Learning", "volume": "", "issue": "", "pages": "2337--2346", "other_ids": {}, "num": null, "urls": [], "raw_text": "Linyuan Gong, Di He, Zhuohan Li, Tao Qin, Liwei Wang, and Tieyan Liu. 2019. Efficient training of BERT by progressively stacking. In International Conference on Machine Learning, pages 2337-2346.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Compressing BERT: Studying the effects of weight pruning on transfer learning", "authors": [ { "first": "Mitchell", "middle": [ "A" ], "last": "Gordon", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Duh", "suffix": "" }, { "first": "Nicholas", "middle": [], "last": "Andrews", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2002.08307" ] }, "num": null, "urls": [], "raw_text": "Mitchell A. Gordon, Kevin Duh, and Nicholas Andrews. 2020. Compressing BERT: Studying the effects of weight pruning on transfer learning. arXiv preprint arXiv:2002.08307.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Power-bert: Accelerating BERT inference for classification tasks", "authors": [ { "first": "Saurabh", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Anamitra", "middle": [], "last": "Roy Choudhary", "suffix": "" }, { "first": "Venkatesan", "middle": [], "last": "Chakaravarthy", "suffix": "" }, { "first": "Saurabh", "middle": [], "last": "ManishRaje", "suffix": "" }, { "first": "Yogish", "middle": [], "last": "Sabharwal", "suffix": "" }, { "first": "Ashish", "middle": [], "last": "Verma", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2001.08950" ] }, "num": null, "urls": [], "raw_text": "Saurabh Goyal, Anamitra Roy Choudhary, Venkatesan Chakaravarthy, Saurabh ManishRaje, Yogish Sabharwal, and Ashish Verma. 2020. Power-bert: Accelerating BERT inference for classification tasks. arXiv preprint arXiv:2001.08950.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Reweighted Proximal Pruning for Large-Scale Language Representation", "authors": [ { "first": "Fu-Ming", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Sijia", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Finlay", "middle": [ "S" ], "last": "Mungall", "suffix": "" }, { "first": "Xue", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Yanzhi", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1909.12486" ] }, "num": null, "urls": [], "raw_text": "Fu-Ming Guo, Sijia Liu, Finlay S. Mungall, Xue Lin, and Yanzhi Wang. 2019. Reweighted Proximal Pruning for Large-Scale Language Representation.
arXiv:1909.12486 [cs, stat].", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "REALM: Retrieval-Augmented Language Model Pre-Training", "authors": [ { "first": "Kelvin", "middle": [], "last": "Guu", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Zora", "middle": [], "last": "Tung", "suffix": "" }, { "first": "Panupong", "middle": [], "last": "Pasupat", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2002.08909" ] }, "num": null, "urls": [], "raw_text": "Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. 2020. REALM: Retrieval-Augmented Language Model Pre- Training. arXiv:2002.08909 [cs].", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "Visualizing and Understanding the Effectiveness of BERT", "authors": [ { "first": "Yaru", "middle": [], "last": "Hao", "suffix": "" }, { "first": "Li", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Furu", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Ke", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "4143--4152", "other_ids": { "DOI": [ "10.18653/v1/D19-1424" ] }, "num": null, "urls": [], "raw_text": "Yaru Hao, Li Dong, Furu Wei, and Ke Xu. 2019. Visualizing and Understanding the Effectiveness of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4143-4152, Hong Kong, China. Asso- ciation for Computational Linguistics. DOI: https://doi.org/10.18653/v1/D19 -1424", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "A Structural Probe for Finding Syntax in Word Representations", "authors": [ { "first": "John", "middle": [], "last": "Hewitt", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4129--4138", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Hewitt and Christopher D. Manning. 2019. A Structural Probe for Finding Syntax in Word Representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4129-4138.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "Distilling the Knowledge in a Neural Network", "authors": [ { "first": "Geoffrey", "middle": [], "last": "Hinton", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2014, "venue": "Deep Learning and Representation Learning Workshop: NIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2014. Distilling the Knowledge in a Neural Network. 
In Deep Learning and Representation Learning Workshop: NIPS 2014.", "links": null }, "BIBREF54": { "ref_id": "b54", "title": "exBERT: A Visual Analysis Tool to Explore Learned Representations in Transformers Models", "authors": [ { "first": "Benjamin", "middle": [], "last": "Hoover", "suffix": "" }, { "first": "Hendrik", "middle": [], "last": "Strobelt", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Gehrmann", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-demos.22" ], "arXiv": [ "arXiv:1910.05276[cs].DOI" ] }, "num": null, "urls": [], "raw_text": "Benjamin Hoover, Hendrik Strobelt, and Sebastian Gehrmann. 2019. exBERT: A Visual Analysis Tool to Explore Learned Represen- tations in Transformers Models. arXiv:1910. 05276 [cs]. DOI: https://doi.org/10 .18653/v1/2020.acl-demos.22", "links": null }, "BIBREF55": { "ref_id": "b55", "title": "Parameter-Efficient Transfer Learning for NLP", "authors": [ { "first": "Neil", "middle": [], "last": "Houlsby", "suffix": "" }, { "first": "Andrei", "middle": [], "last": "Giurgiu", "suffix": "" }, { "first": "Stanislaw", "middle": [], "last": "Jastrzebski", "suffix": "" }, { "first": "Bruna", "middle": [], "last": "Morrone", "suffix": "" }, { "first": "Quentin", "middle": [], "last": "De Laroussilhe", "suffix": "" }, { "first": "Andrea", "middle": [], "last": "Gesmundo", "suffix": "" }, { "first": "Mona", "middle": [], "last": "Attariyan", "suffix": "" }, { "first": "Sylvain", "middle": [], "last": "Gelly", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1902.00751" ] }, "num": null, "urls": [], "raw_text": "Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter- Efficient Transfer Learning for NLP. arXiv: 1902.00751 [cs, stat].", "links": null }, "BIBREF56": { "ref_id": "b56", "title": "Do attention heads in BERT track syntactic dependencies? arXiv preprint", "authors": [ { "first": "Jason", "middle": [], "last": "Phu Mon Htut", "suffix": "" }, { "first": "Shikha", "middle": [], "last": "Phang", "suffix": "" }, { "first": "Samuel", "middle": [ "R" ], "last": "Bordia", "suffix": "" }, { "first": "", "middle": [], "last": "Bowman", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1911.12246" ] }, "num": null, "urls": [], "raw_text": "Phu Mon Htut, Jason Phang, Shikha Bordia, and Samuel R. Bowman. 2019. Do attention heads in BERT track syntactic dependencies? arXiv preprint arXiv:1911.12246.", "links": null }, "BIBREF57": { "ref_id": "b57", "title": "Attention is not Explanation", "authors": [ { "first": "Sarthak", "middle": [], "last": "Jain", "suffix": "" }, { "first": "Byron", "middle": [ "C" ], "last": "Wallace", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "3543--3556", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sarthak Jain and Byron C. Wallace. 2019. Attention is not Explanation. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3543-3556.", "links": null }, "BIBREF58": { "ref_id": "b58", "title": "What does BERT learn about the structure of language?", "authors": [ { "first": "Ganesh", "middle": [], "last": "Jawahar", "suffix": "" }, { "first": "Beno\u00eet", "middle": [], "last": "Sagot", "suffix": "" }, { "first": "Djam\u00e9", "middle": [], "last": "Seddah", "suffix": "" } ], "year": 2019, "venue": "57th Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/P19-1356" ] }, "num": null, "urls": [], "raw_text": "Ganesh Jawahar, Beno\u00eet Sagot, and Djam\u00e9 Seddah. 2019. What does BERT learn about the structure of language? In 57th Annual Meeting of the Association for Computational Linguistics (ACL), Florence, Italy. DOI: https://doi.org/10.18653/v1/P19-1356", "links": null }, "BIBREF59": { "ref_id": "b59", "title": "SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization", "authors": [ { "first": "Haoming", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Pengcheng", "middle": [], "last": "He", "suffix": "" }, { "first": "Weizhu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Tuo", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.197" ], "arXiv": [ "arXiv:1911.03437" ], "PMID": [ "33121726" ] }, "num": null, "urls": [], "raw_text": "Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, and Tuo Zhao. 2019a. SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization. arXiv preprint arXiv:1911.03437. DOI: https://doi.org/10.18653/v1/2020.acl-main.197, PMID: 33121726, PMCID: PMC7218724", "links": null }, "BIBREF60": { "ref_id": "b60", "title": "How Can We Know What Language Models Know?", "authors": [ { "first": "Zhengbao", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Frank", "middle": [ "F" ], "last": "Xu", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Araki", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1162/tacl_a_00324" ], "arXiv": [ "arXiv:1911.12543[cs].DOI" ] }, "num": null, "urls": [], "raw_text": "Zhengbao Jiang, Frank F. Xu, Jun Araki, and Graham Neubig. 2019b. How Can We Know What Language Models Know?
arXiv:1911.12543 [cs]. DOI: https://doi.org/10.1162/tacl_a_00324", "links": null }, "BIBREF61": { "ref_id": "b61", "title": "TinyBERT: Distilling BERT for natural language understanding", "authors": [ { "first": "Xiaoqi", "middle": [], "last": "Jiao", "suffix": "" }, { "first": "Yichun", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Lifeng", "middle": [], "last": "Shang", "suffix": "" }, { "first": "Xin", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Xiao", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Linlin", "middle": [], "last": "Li", "suffix": "" }, { "first": "Fang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1909.10351" ] }, "num": null, "urls": [], "raw_text": "Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. 2019. TinyBERT: Distilling BERT for natural language understanding. arXiv preprint arXiv:1909.10351.", "links": null }, "BIBREF62": { "ref_id": "b62", "title": "Is BERT Really Robust? A Strong Baseline for Natural Language Attack on Text Classification and Entailment", "authors": [ { "first": "Di", "middle": [], "last": "Jin", "suffix": "" }, { "first": "Zhijing", "middle": [], "last": "Jin", "suffix": "" }, { "first": "Joey", "middle": [ "Tianyi" ], "last": "Zhou", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Szolovits", "suffix": "" } ], "year": 2020, "venue": "AAAI 2020", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1609/aaai.v34i05.6311" ] }, "num": null, "urls": [], "raw_text": "Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. 2020. Is BERT Really Robust? A Strong Baseline for Natural Language Attack on Text Classification and Entailment. In AAAI 2020. DOI: https://doi.org/10.1609/aaai.v34i05.6311", "links": null }, "BIBREF63": { "ref_id": "b63", "title": "SpanBERT: Improving Pre-Training by Representing and Predicting Spans", "authors": [ { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Daniel", "middle": [ "S" ], "last": "Weld", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" } ], "year": 2020, "venue": "Transactions of the Association for Computational Linguistics", "volume": "8", "issue": "", "pages": "64--77", "other_ids": { "DOI": [ "10.1162/tacl_a_00300" ] }, "num": null, "urls": [], "raw_text": "Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. 2020. SpanBERT: Improving Pre-Training by Representing and Predicting Spans. Transactions of the Association for Computational Linguistics, 8:64-77.
DOI: https://doi.org/10.1162/tacl_a_00300", "links": null }, "BIBREF64": { "ref_id": "b64", "title": "Further boosting BERT-based models by duplicating existing layers: Some intriguing phenomena inside BERT", "authors": [ { "first": "Wei-Tsung", "middle": [], "last": "Kao", "suffix": "" }, { "first": "Tsung-Han", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Po-Han", "middle": [], "last": "Chi", "suffix": "" }, { "first": "Chun-Cheng", "middle": [], "last": "Hsieh", "suffix": "" }, { "first": "Hung-Yi", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2001.09309" ] }, "num": null, "urls": [], "raw_text": "Wei-Tsung Kao, Tsung-Han Wu, Po-Han Chi, Chun-Cheng Hsieh, and Hung-Yi Lee. 2020. Further boosting BERT-based models by duplicating existing layers: Some intriguing phenomena inside BERT. arXiv preprint arXiv:2001.09309.", "links": null }, "BIBREF65": { "ref_id": "b65", "title": "Are pre-trained language models aware of phrases? simple but strong baselines for grammar induction", "authors": [ { "first": "Taeuk", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Jihun", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Edmiston", "suffix": "" }, { "first": "Sang-Goo", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2020, "venue": "", "volume": "2020", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Taeuk Kim, Jihun Choi, Daniel Edmiston, and Sang-goo Lee. 2020. Are pre-trained language models aware of phrases? simple but strong baselines for grammar induction. In ICLR 2020.", "links": null }, "BIBREF66": { "ref_id": "b66", "title": "Attention Module is Not Only a Weight: Analyzing Transformers with Vector Norms", "authors": [ { "first": "Goro", "middle": [], "last": "Kobayashi", "suffix": "" }, { "first": "Tatsuki", "middle": [], "last": "Kuribayashi", "suffix": "" }, { "first": "Sho", "middle": [], "last": "Yokoi", "suffix": "" }, { "first": "Kentaro", "middle": [], "last": "Inui", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2004.10102" ] }, "num": null, "urls": [], "raw_text": "Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi, and Kentaro Inui. 2020. Attention Module is Not Only a Weight: Analyzing Transformers with Vector Norms. arXiv:2004.10102 [cs].", "links": null }, "BIBREF67": { "ref_id": "b67", "title": "75 Languages, 1 Model: Parsing Universal Dependencies Universally", "authors": [ { "first": "Dan", "middle": [], "last": "Kondratyuk", "suffix": "" }, { "first": "Milan", "middle": [], "last": "Straka", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "2779--2795", "other_ids": { "DOI": [ "10.18653/v1/D19-1279" ] }, "num": null, "urls": [], "raw_text": "Dan Kondratyuk and Milan Straka. 2019. 75 Languages, 1 Model: Parsing Universal Dependencies Universally. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2779-2795, Hong Kong, China. Association for Computational Linguistics.
DOI: https://doi.org/10.18653/v1/D19 -1279", "links": null }, "BIBREF68": { "ref_id": "b68", "title": "A mutual information maximization perspective of language representation learning", "authors": [ { "first": "Lingpeng", "middle": [], "last": "Kong", "suffix": "" }, { "first": "Cyprien", "middle": [], "last": "De Masson D'autume", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Wang", "middle": [], "last": "Ling", "suffix": "" }, { "first": "Zihang", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Dani", "middle": [], "last": "Yogatama", "suffix": "" } ], "year": 2019, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lingpeng Kong, Cyprien de Masson d'Autume, Lei Yu, Wang Ling, Zihang Dai, and Dani Yogatama. 2019. A mutual information max- imization perspective of language representa- tion learning. In International Conference on Learning Representations.", "links": null }, "BIBREF69": { "ref_id": "b69", "title": "Association for Computational Linguistics", "authors": [ { "first": "Olga", "middle": [], "last": "Kovaleva", "suffix": "" }, { "first": "Alexey", "middle": [], "last": "Romanov", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Rogers", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Rumshisky", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "4356--4365", "other_ids": { "DOI": [ "10.18653/v1/D19-1445" ] }, "num": null, "urls": [], "raw_text": "Olga Kovaleva, Alexey Romanov, Anna Rogers, and Anna Rumshisky. 2019. Revealing the Dark Secrets of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4356-4365, Hong Kong, China. Asso- ciation for Computational Linguistics. DOI: https://doi.org/10.18653/v1/D19 -1445", "links": null }, "BIBREF70": { "ref_id": "b70", "title": "Nicolas Papernot, and Mohit Iyyer. 2020. Thieves on Sesame Street! Model Extraction of BERT-Based APIs", "authors": [ { "first": "Kalpesh", "middle": [], "last": "Krishna", "suffix": "" }, { "first": "Gaurav", "middle": [], "last": "Singh Tomar", "suffix": "" }, { "first": "Ankur", "middle": [ "P" ], "last": "Parikh", "suffix": "" } ], "year": null, "venue": "", "volume": "2020", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kalpesh Krishna, Gaurav Singh Tomar, Ankur P. Parikh, Nicolas Papernot, and Mohit Iyyer. 2020. Thieves on Sesame Street! Model Extraction of BERT-Based APIs. In ICLR 2020.", "links": null }, "BIBREF71": { "ref_id": "b71", "title": "Data Augmentation using Pre-Trained Transformer Models", "authors": [ { "first": "Varun", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Ashutosh", "middle": [], "last": "Choudhary", "suffix": "" }, { "first": "Eunah", "middle": [], "last": "Cho", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2003.02245" ] }, "num": null, "urls": [], "raw_text": "Varun Kumar, Ashutosh Choudhary, and Eunah Cho. 2020. Data Augmentation using Pre- Trained Transformer Models. arXiv:2003. 
02245 [cs].", "links": null }, "BIBREF72": { "ref_id": "b72", "title": "A Matter of Framing: The Impact of Linguistic Formalism on Probing Results", "authors": [ { "first": "Ilia", "middle": [], "last": "Kuznetsov", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2004.14999" ] }, "num": null, "urls": [], "raw_text": "Ilia Kuznetsov and Iryna Gurevych. 2020. A Matter of Framing: The Impact of Linguistic Formalism on Probing Results. arXiv:2004. 14999 [cs].", "links": null }, "BIBREF73": { "ref_id": "b73", "title": "Cross-Lingual Language Model Pretraining", "authors": [ { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1901.07291" ] }, "num": null, "urls": [], "raw_text": "Guillaume Lample and Alexis Conneau. 2019. Cross-Lingual Language Model Pretraining. arXiv:1901.07291 [cs].", "links": null }, "BIBREF74": { "ref_id": "b74", "title": "ALBERT: A Lite BERT for Self-Supervised Learning of Language Representations", "authors": [ { "first": "Zhenzhong", "middle": [], "last": "Lan", "suffix": "" }, { "first": "Mingda", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Goodman", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "" }, { "first": "Piyush", "middle": [], "last": "Sharma", "suffix": "" }, { "first": "Radu", "middle": [], "last": "Soricut", "suffix": "" } ], "year": 2020, "venue": "ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020a. ALBERT: A Lite BERT for Self-Supervised Learning of Language Representations. In ICLR.", "links": null }, "BIBREF75": { "ref_id": "b75", "title": "Mixout: Effective regularization to finetune large-scale pretrained language models", "authors": [ { "first": "Cheolhyoung", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Wanmo", "middle": [], "last": "Kang", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1909.11299" ] }, "num": null, "urls": [], "raw_text": "Cheolhyoung Lee, Kyunghyun Cho, and Wanmo Kang. 2019. Mixout: Effective regularization to finetune large-scale pretrained language models. 
arXiv preprint arXiv:1909.11299.", "links": null }, "BIBREF76": { "ref_id": "b76", "title": "BART: Denoising Sequence-to-Sequence Pre-Training for Natural Language Generation, Translation, and Comprehension", "authors": [ { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Marjan", "middle": [], "last": "Ghazvininejad", "suffix": "" }, { "first": "Abdelrahman", "middle": [], "last": "Mohamed", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Ves", "middle": [], "last": "Stoyanov", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.703" ], "arXiv": [ "arXiv:1910.13461" ] }, "num": null, "urls": [], "raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. BART: Denoising Sequence-to-Sequence Pre-Training for Natural Language Genera- tion, Translation, and Comprehension. arXiv: 1910.13461 [cs, stat]. DOI: https://doi .org/10.18653/v1/2020.acl-main.703", "links": null }, "BIBREF77": { "ref_id": "b77", "title": "Transformers to Learn Hierarchical Contexts in Multiparty Dialogue for Span-based Question Answering", "authors": [ { "first": "Changmao", "middle": [], "last": "Li", "suffix": "" }, { "first": "D", "middle": [], "last": "Jinho", "suffix": "" }, { "first": "", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "5709--5714", "other_ids": {}, "num": null, "urls": [], "raw_text": "Changmao Li and Jinho D. Choi. 2020. Trans- formers to Learn Hierarchical Contexts in Multiparty Dialogue for Span-based Question Answering. In Proceedings of the 58th Annual Meeting of the Association for Computa- tional Linguistics, pages 5709-5714, Online. Association for Computational Linguistics.", "links": null }, "BIBREF78": { "ref_id": "b78", "title": "Train large, then compress: Rethinking model size for efficient training and inference of transformers", "authors": [ { "first": "Zhuohan", "middle": [], "last": "Li", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Wallace", "suffix": "" }, { "first": "Sheng", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Kurt", "middle": [], "last": "Keutzer", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" }, { "first": "Joseph", "middle": [ "E" ], "last": "Gonzalez", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2002.11794" ] }, "num": null, "urls": [], "raw_text": "Zhuohan Li, Eric Wallace, Sheng Shen, Kevin Lin, Kurt Keutzer, Dan Klein, and Joseph E. Gonzalez. 2020. Train large, then compress: Rethinking model size for efficient training and inference of transformers. 
arXiv preprint arXiv:2002.11794.", "links": null }, "BIBREF79": { "ref_id": "b79", "title": "Open Sesame: Getting inside BERT's Linguistic Knowledge", "authors": [ { "first": "Yongjie", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Chern Tan", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Frank", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", "volume": "", "issue": "", "pages": "241--253", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yongjie Lin, Yi Chern Tan, and Robert Frank. 2019. Open Sesame: Getting inside BERT's Linguistic Knowledge. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 241-253.", "links": null }, "BIBREF80": { "ref_id": "b80", "title": "Linguistic Knowledge and Transferability of", "authors": [ { "first": "Nelson", "middle": [ "F" ], "last": "Liu", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Yonatan", "middle": [], "last": "Belinkov", "suffix": "" }, { "first": "Matthew", "middle": [ "E" ], "last": "Peters", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, and Noah A. Smith. 2019a. Linguistic Knowledge and Transferability of", "links": null }, "BIBREF81": { "ref_id": "b81", "title": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "authors": [], "year": null, "venue": "", "volume": "1", "issue": "", "pages": "1073--1094", "other_ids": {}, "num": null, "urls": [], "raw_text": "Contextual Representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1073-1094, Minneapolis, Minnesota. Association for Computational Linguistics.", "links": null }, "BIBREF82": { "ref_id": "b82", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach", "authors": [ { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Jingfei", "middle": [], "last": "Du", "suffix": "" }, { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.11692" ] }, "num": null, "urls": [], "raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. RoBERTa: A Robustly Opti- mized BERT Pretraining Approach. 
arXiv: 1907.11692 [cs].", "links": null }, "BIBREF83": { "ref_id": "b83", "title": "Universal Text Representation from BERT: An Empirical Study", "authors": [ { "first": "Xiaofei", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Zhiguo", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Ng", "suffix": "" }, { "first": "Ramesh", "middle": [], "last": "Nallapati", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Xiang", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1910.07973" ] }, "num": null, "urls": [], "raw_text": "Xiaofei Ma, Zhiguo Wang, Patrick Ng, Ramesh Nallapati, and Bing Xiang. 2019. Universal Text Representation from BERT: An Empirical Study. arXiv:1910.07973 [cs].", "links": null }, "BIBREF84": { "ref_id": "b84", "title": "Emergent linguistic structure in artificial neural networks trained by self-supervision", "authors": [ { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Clark", "suffix": "" }, { "first": "John", "middle": [], "last": "Hewitt", "suffix": "" }, { "first": "Urvashi", "middle": [], "last": "Khandelwal", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the National Academy of Sciences", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1073/pnas.1907367117" ], "PMID": [ "32493748" ] }, "num": null, "urls": [], "raw_text": "Christopher D. Manning, Kevin Clark, John Hewitt, Urvashi Khandelwal, and Omer Levy. 2020. Emergent linguistic structure in artificial neural networks trained by self-supervision. Proceedings of the National Academy of Sci- ences, page 201907367. DOI: https:// doi.org/10.1073/pnas.1907367117, PMID: 32493748", "links": null }, "BIBREF85": { "ref_id": "b85", "title": "On Measuring Social Biases in Sentence Encoders", "authors": [ { "first": "Chandler", "middle": [], "last": "May", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Shikha", "middle": [], "last": "Bordia", "suffix": "" }, { "first": "Samuel", "middle": [ "R" ], "last": "Bowman", "suffix": "" }, { "first": "Rachel", "middle": [], "last": "Rudinger", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "622--628", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chandler May, Alex Wang, Shikha Bordia, Samuel R. Bowman, and Rachel Rudinger. 2019. On Measuring Social Biases in Sentence Encoders. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 622-628, Minneapolis, Minnesota. Association for Computational Linguistics.", "links": null }, "BIBREF86": { "ref_id": "b86", "title": "Structured Pruning of a BERT-based Question Answering Model", "authors": [ { "first": "J", "middle": [ "S" ], "last": "Mccarley", "suffix": "" }, { "first": "Rishav", "middle": [], "last": "Chakravarti", "suffix": "" }, { "first": "Avirup", "middle": [], "last": "Sil", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1910.06360" ] }, "num": null, "urls": [], "raw_text": "J. S. 
McCarley, Rishav Chakravarti, and Avirup Sil. 2020. Structured Pruning of a BERT-based Question Answering Model. arXiv:1910.06360 [cs].", "links": null }, "BIBREF87": { "ref_id": "b87", "title": "RNNs implicitly implement tensor-product representations", "authors": [ { "first": "R", "middle": [], "last": "", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Mccoy", "suffix": "" }, { "first": "Tal", "middle": [], "last": "Linzen", "suffix": "" }, { "first": "Ewan", "middle": [], "last": "Dunbar", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Smolensky", "suffix": "" } ], "year": 2019, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Thomas McCoy, Tal Linzen, Ewan Dunbar, and Paul Smolensky. 2019a. RNNs implicitly implement tensor-product representations. In International Conference on Learning Representations.", "links": null }, "BIBREF88": { "ref_id": "b88", "title": "Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference", "authors": [ { "first": "Tom", "middle": [], "last": "Mccoy", "suffix": "" }, { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" }, { "first": "Tal", "middle": [], "last": "Linzen", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "19--1334", "other_ids": { "DOI": [ "10.18653/v1/P19-1334" ] }, "num": null, "urls": [], "raw_text": "Tom McCoy, Ellie Pavlick, and Tal Linzen. 2019b. Right for the Wrong Reasons: Diag- nosing Syntactic Heuristics in Natural Lan- guage Inference. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 3428-3448, Florence, Italy. Association for Computational Linguistics. DOI: https://doi.org/10 .18653/v1/P19-1334", "links": null }, "BIBREF89": { "ref_id": "b89", "title": "Contextual and Non-Contextual Word Embeddings: An in-depth Linguistic Investigation", "authors": [ { "first": "Alessio", "middle": [], "last": "Miaschi", "suffix": "" }, { "first": "Felice", "middle": [], "last": "Dell'orletta", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 5th Workshop on Representation Learning for NLP", "volume": "", "issue": "", "pages": "110--119", "other_ids": { "DOI": [ "10.18653/v1/2020.repl4nlp-1.15" ] }, "num": null, "urls": [], "raw_text": "Alessio Miaschi and Felice Dell'Orletta. 2020. Contextual and Non-Contextual Word Embed- dings: An in-depth Linguistic Investigation. In Proceedings of the 5th Workshop on Represen- tation Learning for NLP, pages 110-119. DOI: https://doi.org/10.18653/v1/2020 .repl4nlp-1.15", "links": null }, "BIBREF90": { "ref_id": "b90", "title": "Are Sixteen Heads Really Better than One?", "authors": [ { "first": "Paul", "middle": [], "last": "Michel", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems", "volume": "32", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paul Michel, Omer Levy, and Graham Neubig. 2019. Are Sixteen Heads Really Better than One? 
Advances in Neural Information Process- ing Systems 32 (NIPS 2019).", "links": null }, "BIBREF91": { "ref_id": "b91", "title": "What do you mean", "authors": [ { "first": "Timothee", "middle": [], "last": "Mickus", "suffix": "" }, { "first": "Denis", "middle": [], "last": "Paperno", "suffix": "" }, { "first": "Mathieu", "middle": [], "last": "Constant", "suffix": "" }, { "first": "Kees", "middle": [], "last": "Van Deemeter", "suffix": "" } ], "year": 2019, "venue": "BERT? assessing BERT as a distributional semantics model", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1911.05758" ] }, "num": null, "urls": [], "raw_text": "Timothee Mickus, Denis Paperno, Mathieu Constant, and Kees van Deemeter. 2019. What do you mean, BERT? assessing BERT as a distributional semantics model. arXiv preprint arXiv:1911.05758.", "links": null }, "BIBREF92": { "ref_id": "b92", "title": "Turing-NLG: A 17-billionparameter language model by microsoft", "authors": [ { "first": "", "middle": [], "last": "Microsoft", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Microsoft. 2020. Turing-NLG: A 17-billion- parameter language model by microsoft.", "links": null }, "BIBREF93": { "ref_id": "b93", "title": "Distributed representations of words and phrases and their compositionality", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Advances in Neural Information Processing Systems 26 (NIPS 2013)", "volume": "", "issue": "", "pages": "3111--3119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems 26 (NIPS 2013), pages 3111-3119.", "links": null }, "BIBREF94": { "ref_id": "b94", "title": "All-butthe-top: Simple and effective postprocessing for word representations", "authors": [ { "first": "Jiaqi", "middle": [], "last": "Mu", "suffix": "" }, { "first": "Pramod", "middle": [], "last": "Viswanath", "suffix": "" } ], "year": 2018, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiaqi Mu and Pramod Viswanath. 2018. All-but- the-top: Simple and effective postprocessing for word representations. In International Conference on Learning Representations.", "links": null }, "BIBREF95": { "ref_id": "b95", "title": "Probing Neural Network Comprehension of Natural Language Arguments", "authors": [ { "first": "Timothy", "middle": [], "last": "Niven", "suffix": "" }, { "first": "Hung-Yu", "middle": [], "last": "Kao", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4658--4664", "other_ids": { "DOI": [ "10.18653/v1/P19-1459" ] }, "num": null, "urls": [], "raw_text": "Timothy Niven and Hung-Yu Kao. 2019. Probing Neural Network Comprehension of Natural Language Arguments. 
In Pro- ceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4658-4664, Florence, Italy. Associ- ation for Computational Linguistics. DOI: https://doi.org/10.18653/v1/P19 -1459", "links": null }, "BIBREF97": { "ref_id": "b97", "title": "Knowledge Enhanced Contextual Word Representations", "authors": [ { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "43--54", "other_ids": { "DOI": [ "10.18653/v1/D19-1005" ], "PMID": [ "31383442" ] }, "num": null, "urls": [], "raw_text": "Noah A. Smith. 2019a. Knowledge Enhanced Contextual Word Representations. In Proceed- ings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Nat- ural Language Processing (EMNLP-IJCNLP), pages 43-54, Hong Kong, China. Associ- ation for Computational Linguistics. DOI: https://doi.org/10.18653/v1/D19 -1005, PMID: 31383442", "links": null }, "BIBREF98": { "ref_id": "b98", "title": "To Tune or Not to Tune? Adapting Pretrained Representations to Diverse Tasks", "authors": [ { "first": "Matthew", "middle": [ "E" ], "last": "Peters", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019)", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/W19-4302" ] }, "num": null, "urls": [], "raw_text": "Matthew E. Peters, Sebastian Ruder, and Noah A. Smith. 2019b. To Tune or Not to Tune? Adapting Pretrained Representations to Diverse Tasks. In Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019), pages 7-14, Florence, Italy. Association for Computational Lin- guistics. DOI: https://doi.org/10.18653 /v1/W19-4302, PMCID: PMC6351953", "links": null }, "BIBREF99": { "ref_id": "b99", "title": "Association for Computational Linguistics", "authors": [ { "first": "Fabio", "middle": [], "last": "Petroni", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rockt\u00e4schel", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Anton", "middle": [], "last": "Bakhtin", "suffix": "" }, { "first": "Yuxiang", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Miller", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "2463--2473", "other_ids": { "DOI": [ "10.18653/v1/D19-1250" ] }, "num": null, "urls": [], "raw_text": "Fabio Petroni, Tim Rockt\u00e4schel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language Models as Knowledge Bases? In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2463-2473, Hong Kong, China. Asso- ciation for Computational Linguistics. 
DOI: https://doi.org/10.18653/v1/D19-1250", "links": null }, "BIBREF100": { "ref_id": "b100", "title": "Sentence Encoders on STILTs: Supplementary Training on Intermediate Labeled-Data Tasks", "authors": [ { "first": "Jason", "middle": [], "last": "Phang", "suffix": "" }, { "first": "Thibault", "middle": [], "last": "F\u00e9vry", "suffix": "" }, { "first": "Samuel", "middle": [ "R" ], "last": "Bowman", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1811.01088" ] }, "num": null, "urls": [], "raw_text": "Jason Phang, Thibault F\u00e9vry, and Samuel R. Bowman. 2019. Sentence Encoders on STILTs: Supplementary Training on Intermediate Labeled-Data Tasks. arXiv:1811.01088 [cs].", "links": null }, "BIBREF101": { "ref_id": "b101", "title": "Information-Theoretic Probing for Linguistic Structure", "authors": [ { "first": "Tiago", "middle": [], "last": "Pimentel", "suffix": "" }, { "first": "Josef", "middle": [], "last": "Valvoda", "suffix": "" }, { "first": "Rowan", "middle": [], "last": "Hall Maudslay", "suffix": "" }, { "first": "Ran", "middle": [], "last": "Zmigrod", "suffix": "" }, { "first": "Adina", "middle": [], "last": "Williams", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Cotterell", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.420" ], "arXiv": [ "arXiv:2004.03061[cs].DOI" ] }, "num": null, "urls": [], "raw_text": "Tiago Pimentel, Josef Valvoda, Rowan Hall Maudslay, Ran Zmigrod, Adina Williams, and Ryan Cotterell. 2020. Information-Theoretic Probing for Linguistic Structure. arXiv:2004.03061 [cs]. DOI: https://doi.org/10.18653/v1/2020.acl-main.420", "links": null }, "BIBREF102": { "ref_id": "b102", "title": "BERT is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa", "authors": [ { "first": "Nina", "middle": [], "last": "Poerner", "suffix": "" }, { "first": "Ulli", "middle": [], "last": "Waltinger", "suffix": "" }, { "first": "Hinrich", "middle": [], "last": "Sch\u00fctze", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1911.03681" ] }, "num": null, "urls": [], "raw_text": "Nina Poerner, Ulli Waltinger, and Hinrich Sch\u00fctze. 2019. BERT is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681.", "links": null }, "BIBREF103": { "ref_id": "b103", "title": "When BERT Plays the Lottery, All Tickets Are Winning", "authors": [ { "first": "Sai", "middle": [], "last": "Prasanna", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Rogers", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Rumshisky", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sai Prasanna, Anna Rogers, and Anna Rumshisky. 2020. When BERT Plays the Lottery, All Tickets Are Winning. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing. Online.
Association for Computational Linguistics.", "links": null }, "BIBREF104": { "ref_id": "b104", "title": "Improving Transformer Models by Reordering their Sublayers", "authors": [ { "first": "Ofir", "middle": [], "last": "Press", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "2996--3005", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.270" ] }, "num": null, "urls": [], "raw_text": "Ofir Press, Noah A. Smith, and Omer Levy. 2020. Improving Transformer Models by Re- ordering their Sublayers. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2996-3005, Online. Association for Computational Lin- guistics. DOI: https://doi.org/10.18653 /v1/2020.acl-main.270", "links": null }, "BIBREF105": { "ref_id": "b105", "title": "Intermediate-Task Transfer Learning with Pretrained Language Models: When and Why Does It Work?", "authors": [ { "first": "Yada", "middle": [], "last": "Pruksachatkun", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Phang", "suffix": "" }, { "first": "Haokun", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Xiaoyi", "middle": [], "last": "Phu Mon Htut", "suffix": "" }, { "first": "Richard", "middle": [ "Yuanzhe" ], "last": "Zhang", "suffix": "" }, { "first": "Clara", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Katharina", "middle": [], "last": "Vania", "suffix": "" }, { "first": "Samuel", "middle": [ "R" ], "last": "Kann", "suffix": "" }, { "first": "", "middle": [], "last": "Bowman", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "5231--5247", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.467" ] }, "num": null, "urls": [], "raw_text": "Yada Pruksachatkun, Jason Phang, Haokun Liu, Phu Mon Htut, Xiaoyi Zhang, Richard Yuanzhe Pang, Clara Vania, Katharina Kann, and Samuel R. Bowman. 2020. Intermediate-Task Transfer Learning with Pretrained Language Models: When and Why Does It Work? In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 5231-5247, Online. Asso- ciation for Computational Linguistics. DOI: https://doi.org/10.18653/v1/2020 .acl-main.467", "links": null }, "BIBREF106": { "ref_id": "b106", "title": "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer", "authors": [ { "first": "Colin", "middle": [], "last": "Raffel", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Roberts", "suffix": "" }, { "first": "Katherine", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Sharan", "middle": [], "last": "Narang", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Matena", "suffix": "" }, { "first": "Yanqi", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Peter", "middle": [ "J" ], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1910.10683" ] }, "num": null, "urls": [], "raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. 
Liu. 2019. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. arXiv:1910.10683 [cs, stat].", "links": null }, "BIBREF107": { "ref_id": "b107", "title": "Fixed Encoder Self-Attention Patterns in Transformer-Based Machine Translation", "authors": [ { "first": "Alessandro", "middle": [], "last": "Raganato", "suffix": "" }, { "first": "Yves", "middle": [], "last": "Scherrer", "suffix": "" }, { "first": "J\u00f6rg", "middle": [], "last": "Tiedemann", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2002.10260" ] }, "num": null, "urls": [], "raw_text": "Alessandro Raganato, Yves Scherrer, and J\u00f6rg Tiedemann. 2020. Fixed Encoder Self-Attention Patterns in Transformer-Based Machine Trans- lation. arXiv:2002.10260 [cs].", "links": null }, "BIBREF108": { "ref_id": "b108", "title": "An Analysis of Encoder Representations in Transformer-Based Machine Translation", "authors": [ { "first": "Alessandro", "middle": [], "last": "Raganato", "suffix": "" }, { "first": "J\u00f6rg", "middle": [], "last": "Tiedemann", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", "volume": "", "issue": "", "pages": "287--297", "other_ids": { "DOI": [ "10.18653/v1/W18-5431" ] }, "num": null, "urls": [], "raw_text": "Alessandro Raganato and J\u00f6rg Tiedemann. 2018. An Analysis of Encoder Representations in Transformer-Based Machine Translation. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 287-297, Brussels, Belgium. Association for Computa- tional Linguistics. DOI: https://doi.org /10.18653/v1/W18-5431", "links": null }, "BIBREF109": { "ref_id": "b109", "title": "Beyond Accuracy: Behavioral Testing of NLP Models with CheckList", "authors": [ { "first": "Tongshuang", "middle": [], "last": "Marco Tulio Ribeiro", "suffix": "" }, { "first": "Carlos", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Sameer", "middle": [], "last": "Guestrin", "suffix": "" }, { "first": "", "middle": [], "last": "Singh", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4902--4912", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, and Sameer Singh. 2020. Beyond Accuracy: Behavioral Testing of NLP Models with CheckList. In Proceedings of the 58th Annual Meeting of the Association for Com- putational Linguistics, pages 4902-4912,", "links": null }, "BIBREF110": { "ref_id": "b110", "title": "Association for Computational Linguistics", "authors": [ { "first": "", "middle": [], "last": "Online", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.442" ] }, "num": null, "urls": [], "raw_text": "Online. Association for Computational Lin- guistics. 
DOI: https://doi.org/10.18653/v1/2020.acl-main.442", "links": null }, "BIBREF111": { "ref_id": "b111", "title": "Probing Natural Language Inference Models through Semantic Fragments", "authors": [ { "first": "Kyle", "middle": [], "last": "Richardson", "suffix": "" }, { "first": "Hai", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Lawrence", "middle": [ "S" ], "last": "Moss", "suffix": "" }, { "first": "Ashish", "middle": [], "last": "Sabharwal", "suffix": "" } ], "year": 2020, "venue": "AAAI 2020", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1609/aaai.v34i05.6397" ] }, "num": null, "urls": [], "raw_text": "Kyle Richardson, Hai Hu, Lawrence S. Moss, and Ashish Sabharwal. 2020. Probing Natural Language Inference Models through Semantic Fragments. In AAAI 2020. DOI: https://doi.org/10.1609/aaai.v34i05.6397", "links": null }, "BIBREF112": { "ref_id": "b112", "title": "What Does My QA Model Know? Devising Controlled Probes using Expert Knowledge", "authors": [ { "first": "Kyle", "middle": [], "last": "Richardson", "suffix": "" }, { "first": "Ashish", "middle": [], "last": "Sabharwal", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1162/tacl_a_00331" ], "arXiv": [ "arXiv:1912.13337" ] }, "num": null, "urls": [], "raw_text": "Kyle Richardson and Ashish Sabharwal. 2019. What Does My QA Model Know? Devising Controlled Probes using Expert Knowledge. arXiv:1912.13337 [cs]. DOI: https://doi.org/10.1162/tacl_a_00331", "links": null }, "BIBREF113": { "ref_id": "b113", "title": "How Much Knowledge Can You Pack Into the Parameters of a Language Model?", "authors": [ { "first": "Adam", "middle": [], "last": "Roberts", "suffix": "" }, { "first": "Colin", "middle": [], "last": "Raffel", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2002.08910" ] }, "num": null, "urls": [], "raw_text": "Adam Roberts, Colin Raffel, and Noam Shazeer. 2020. How Much Knowledge Can You Pack Into the Parameters of a Language Model? arXiv preprint arXiv:2002.08910.", "links": null }, "BIBREF114": { "ref_id": "b114", "title": "Getting Closer to AI Complete Question Answering: A Set of Prerequisite Real Tasks", "authors": [ { "first": "Anna", "middle": [], "last": "Rogers", "suffix": "" }, { "first": "Olga", "middle": [], "last": "Kovaleva", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Downey", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Rumshisky", "suffix": "" } ], "year": 2020, "venue": "AAAI", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1609/aaai.v34i05.6398" ] }, "num": null, "urls": [], "raw_text": "Anna Rogers, Olga Kovaleva, Matthew Downey, and Anna Rumshisky. 2020. Getting Closer to AI Complete Question Answering: A Set of Prerequisite Real Tasks. In AAAI, page 11. DOI: https://doi.org/10.1609/aaai.v34i05.6398", "links": null }, "BIBREF115": { "ref_id": "b115", "title": "Inducing syntactic trees from BERT representations", "authors": [ { "first": "Rudolf", "middle": [], "last": "Rosa", "suffix": "" }, { "first": "David", "middle": [], "last": "Mare\u010dek", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1906.11511" ] }, "num": null, "urls": [], "raw_text": "Rudolf Rosa and David Mare\u010dek. 2019. Inducing syntactic trees from BERT representations. 
arXiv preprint arXiv:1906.11511.", "links": null }, "BIBREF116": { "ref_id": "b116", "title": "DistilBERT, a distilled version of BERT: Smaller, faster, cheaper and lighter", "authors": [ { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" } ], "year": 2019, "venue": "5th Workshop on Energy Efficient Machine Learning and Cognitive Computing -NeurIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: Smaller, faster, cheaper and lighter. In 5th Workshop on Energy Efficient Machine Learning and Cognitive Computing - NeurIPS 2019.", "links": null }, "BIBREF117": { "ref_id": "b117", "title": "Movement Pruning: Adaptive Sparsity by Fine-Tuning", "authors": [ { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Alexander", "middle": [ "M" ], "last": "Rush", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2005.07683" ] }, "num": null, "urls": [], "raw_text": "Victor Sanh, Thomas Wolf, and Alexander M. Rush. 2020. Movement Pruning: Adaptive Sparsity by Fine-Tuning. arXiv:2005.07683 [cs].", "links": null }, "BIBREF118": { "ref_id": "b118", "title": "BERTRAM: Improved Word Embeddings Have Big Impact on Contextualized Model Performance", "authors": [ { "first": "Timo", "middle": [], "last": "Schick", "suffix": "" }, { "first": "Hinrich", "middle": [], "last": "Sch\u00fctze", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "3996--4007", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.368" ] }, "num": null, "urls": [], "raw_text": "Timo Schick and Hinrich Sch\u00fctze. 2020. BERTRAM: Improved Word Embeddings Have Big Impact on Contextualized Model Per- formance. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3996-4007, Online. Asso- ciation for Computational Linguistics. DOI: https://doi.org/10.18653/v1/2020 .acl-main.368", "links": null }, "BIBREF119": { "ref_id": "b119", "title": "BERT as a Teacher: Contextual Embeddings for Sequence-Level Reward", "authors": [ { "first": "Florian", "middle": [], "last": "Schmidt", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Hofmann", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2003.02738" ] }, "num": null, "urls": [], "raw_text": "Florian Schmidt and Thomas Hofmann. 2020. BERT as a Teacher: Contextual Embeddings for Sequence-Level Reward. 
arXiv preprint arXiv:2003.02738.", "links": null }, "BIBREF120": { "ref_id": "b120", "title": "Green AI", "authors": [ { "first": "Roy", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "Jesse", "middle": [], "last": "Dodge", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" }, { "first": "Oren", "middle": [], "last": "Etzioni", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.10597" ] }, "num": null, "urls": [], "raw_text": "Roy Schwartz, Jesse Dodge, Noah A. Smith, and Oren Etzioni. 2019. Green AI. arXiv: 1907.10597 [cs, stat].", "links": null }, "BIBREF121": { "ref_id": "b121", "title": "Is Attention Interpretable", "authors": [ { "first": "Sofia", "middle": [], "last": "Serrano", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/P19-1282" ], "arXiv": [ "arXiv:1906.03731" ] }, "num": null, "urls": [], "raw_text": "Sofia Serrano and Noah A. Smith. 2019. Is Attention Interpretable? arXiv:1906.03731 [cs]. DOI: https://doi.org/10.18653 /v1/P19-1282", "links": null }, "BIBREF123": { "ref_id": "b123", "title": "Q-BERT: Hessian Based Ultra Low Precision Quantization of BERT", "authors": [ { "first": "Kurt", "middle": [], "last": "Mahoney", "suffix": "" }, { "first": "", "middle": [], "last": "Keutzer", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1609/aaai.v34i05.6409" ], "arXiv": [ "arXiv:1909.05840" ] }, "num": null, "urls": [], "raw_text": "Mahoney, and Kurt Keutzer. 2019. Q-BERT: Hessian Based Ultra Low Precision Quanti- zation of BERT. arXiv preprint arXiv:1909. 05840. DOI: https://doi.org/10.1609 /aaai.v34i05.6409", "links": null }, "BIBREF124": { "ref_id": "b124", "title": "What does BERT Learn from Multiple-Choice Reading Comprehension Datasets?", "authors": [ { "first": "Chenglei", "middle": [], "last": "Si", "suffix": "" }, { "first": "Shuohang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Min-Yen", "middle": [], "last": "Kan", "suffix": "" }, { "first": "Jing", "middle": [], "last": "Jiang", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1910.12391" ] }, "num": null, "urls": [], "raw_text": "Chenglei Si, Shuohang Wang, Min-Yen Kan, and Jing Jiang. 2019. What does BERT Learn from Multiple-Choice Reading Comprehension Datasets? arXiv:1910.12391 [cs].", "links": null }, "BIBREF125": { "ref_id": "b125", "title": "MPNet: Masked and Permuted Pre-training for Language Understanding", "authors": [ { "first": "Kaitao", "middle": [], "last": "Song", "suffix": "" }, { "first": "Xu", "middle": [], "last": "Tan", "suffix": "" }, { "first": "Tao", "middle": [], "last": "Qin", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Tie-Yan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2004.09297" ] }, "num": null, "urls": [], "raw_text": "Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie-Yan Liu. 2020. MPNet: Masked and Per- muted Pre-training for Language Understand- ing. 
arXiv:2004.09297 [cs].", "links": null }, "BIBREF126": { "ref_id": "b126", "title": "BERT and PALs: Projected Attention Layers for Efficient Adaptation in Multi-Task Learning", "authors": [ { "first": "Asa", "middle": [ "Cooper" ], "last": "Stickland", "suffix": "" }, { "first": "Iain", "middle": [], "last": "Murray", "suffix": "" } ], "year": 2019, "venue": "International Conference on Machine Learning", "volume": "", "issue": "", "pages": "5986--5995", "other_ids": {}, "num": null, "urls": [], "raw_text": "Asa Cooper Stickland and Iain Murray. 2019. BERT and PALs: Projected Attention Layers for Efficient Adaptation in Multi-Task Learn- ing. In International Conference on Machine Learning, pages 5986-5995.", "links": null }, "BIBREF127": { "ref_id": "b127", "title": "Energy and Policy Considerations for Deep Learning in NLP", "authors": [ { "first": "Emma", "middle": [], "last": "Strubell", "suffix": "" }, { "first": "Ananya", "middle": [], "last": "Ganesh", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emma Strubell, Ananya Ganesh, and Andrew McCallum. 2019. Energy and Policy Consider- ations for Deep Learning in NLP. In ACL 2019.", "links": null }, "BIBREF128": { "ref_id": "b128", "title": "SesameBERT: Attention for Anywhere", "authors": [ { "first": "Ta-Chun", "middle": [], "last": "Su", "suffix": "" }, { "first": "Hsiang-Chih", "middle": [], "last": "Cheng", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1910.03176" ] }, "num": null, "urls": [], "raw_text": "Ta-Chun Su and Hsiang-Chih Cheng. 2019. SesameBERT: Attention for Anywhere. arXiv: 1910.03176 [cs].", "links": null }, "BIBREF129": { "ref_id": "b129", "title": "Assessing the Benchmarking Capacity of Machine Reading Comprehension Datasets", "authors": [ { "first": "Saku", "middle": [], "last": "Sugawara", "suffix": "" }, { "first": "Pontus", "middle": [], "last": "Stenetorp", "suffix": "" }, { "first": "Kentaro", "middle": [], "last": "Inui", "suffix": "" }, { "first": "Akiko", "middle": [], "last": "Aizawa", "suffix": "" } ], "year": 2020, "venue": "AAAI", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1609/aaai.v34i05.6422" ] }, "num": null, "urls": [], "raw_text": "Saku Sugawara, Pontus Stenetorp, Kentaro Inui, and Akiko Aizawa. 2020. Assessing the Bench- marking Capacity of Machine Reading Com- prehension Datasets. In AAAI. DOI: https:// doi.org/10.1609/aaai.v34i05.6422", "links": null }, "BIBREF130": { "ref_id": "b130", "title": "Patient Knowledge Distillation for BERT Model Compression", "authors": [ { "first": "Siqi", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Zhe", "middle": [], "last": "Gan", "suffix": "" }, { "first": "Jingjing", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "4314--4323", "other_ids": { "DOI": [ "10.18653/v1/D19-1441" ] }, "num": null, "urls": [], "raw_text": "Siqi Sun, Yu Cheng, Zhe Gan, and Jingjing Liu. 2019a. Patient Knowledge Distillation for BERT Model Compression. 
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4314-4323. DOI: https://doi.org /10.18653/v1/D19-1441", "links": null }, "BIBREF131": { "ref_id": "b131", "title": "ERNIE: Enhanced Representation through Knowledge Integration", "authors": [ { "first": "Yu", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Shuohuan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yukun", "middle": [], "last": "Li", "suffix": "" }, { "first": "Shikun", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Xuyi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Han", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Xin", "middle": [], "last": "Tian", "suffix": "" }, { "first": "Danxiang", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Hua", "middle": [], "last": "Hao Tian", "suffix": "" }, { "first": "", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1904.09223" ] }, "num": null, "urls": [], "raw_text": "Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, and Hua Wu. 2019b. ERNIE: Enhanced Representation through Knowledge Integration. arXiv:1904.09223 [cs].", "links": null }, "BIBREF132": { "ref_id": "b132", "title": "ERNIE 2.0: A Continual Pre-Training Framework for Language Understanding", "authors": [ { "first": "Yu", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Shuohuan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yukun", "middle": [], "last": "Li", "suffix": "" }, { "first": "Shikun", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Hua", "middle": [], "last": "Hao Tian", "suffix": "" }, { "first": "Haifeng", "middle": [], "last": "Wu", "suffix": "" }, { "first": "", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1609/aaai.v34i05.6428" ], "arXiv": [ "arXiv:1907.12412[cs].DOI" ] }, "num": null, "urls": [], "raw_text": "Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Hao Tian, Hua Wu, and Haifeng Wang. 2019c. ERNIE 2.0: A Continual Pre-Training Framework for Language Understanding. arXiv:1907.12412 [cs]. DOI: https://doi .org/10.1609/aaai.v34i05.6428", "links": null }, "BIBREF133": { "ref_id": "b133", "title": "MobileBERT: Task-Agnostic Compression of BERT for Resource Limited Devices", "authors": [ { "first": "Zhiqing", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Hongkun", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Xiaodan", "middle": [], "last": "Song", "suffix": "" }, { "first": "Renjie", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yiming", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Denny", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou. 2020. 
MobileBERT: Task-Agnostic Compression of BERT for Resource Limited Devices.", "links": null }, "BIBREF134": { "ref_id": "b134", "title": "Syntax-Infused Transformer and BERT models for Machine Translation and Natural Language Understanding", "authors": [ { "first": "Dhanasekar", "middle": [], "last": "Sundararaman", "suffix": "" }, { "first": "Vivek", "middle": [], "last": "Subramanian", "suffix": "" }, { "first": "Guoyin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Shijing", "middle": [], "last": "Si", "suffix": "" }, { "first": "Dinghan", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Dong", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Lawrence", "middle": [], "last": "Carin", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1911.06156" ] }, "num": null, "urls": [], "raw_text": "Dhanasekar Sundararaman, Vivek Subramanian, Guoyin Wang, Shijing Si, Dinghan Shen, Dong Wang, and Lawrence Carin. 2019. Syntax-Infused Transformer and BERT models for Machine Translation and Natural Language Understanding. arXiv:1911.06156 [cs, stat].", "links": null }, "BIBREF136": { "ref_id": "b136", "title": "oLMpics -On what Language Model Pre-Training Captures", "authors": [ { "first": "Alon", "middle": [], "last": "Talmor", "suffix": "" }, { "first": "Yanai", "middle": [], "last": "Elazar", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Berant", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1912.13283" ] }, "num": null, "urls": [], "raw_text": "Alon Talmor, Yanai Elazar, Yoav Goldberg, and Jonathan Berant. 2019. oLMpics -On what Language Model Pre-Training Captures. arXiv:1912.13283 [cs].", "links": null }, "BIBREF137": { "ref_id": "b137", "title": "Document Classification by Word Embeddings of BERT", "authors": [ { "first": "Hirotaka", "middle": [], "last": "Tanaka", "suffix": "" }, { "first": "Hiroyuki", "middle": [], "last": "Shinnou", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Jing", "middle": [], "last": "Bai", "suffix": "" }, { "first": "Wen", "middle": [], "last": "Ma", "suffix": "" } ], "year": 2020, "venue": "Computational Linguistics, Communications in Computer and Information Science", "volume": "", "issue": "", "pages": "145--154", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hirotaka Tanaka, Hiroyuki Shinnou, Rui Cao, Jing Bai, and Wen Ma. 2020. Document Classification by Word Embeddings of BERT. 
In Computational Linguistics, Communica- tions in Computer and Information Science, pages 145-154, Singapore, Springer.", "links": null }, "BIBREF138": { "ref_id": "b138", "title": "Distilling Task-Specific Knowledge from BERT into Simple Neural Networks", "authors": [ { "first": "Raphael", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Yao", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Linqing", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Lili", "middle": [], "last": "Mou", "suffix": "" }, { "first": "Olga", "middle": [], "last": "Vechtomova", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1903.12136" ] }, "num": null, "urls": [], "raw_text": "Raphael Tang, Yao Lu, Linqing Liu, Lili Mou, Olga Vechtomova, and Jimmy Lin. 2019. Distilling Task-Specific Knowledge from BERT into Simple Neural Networks. arXiv preprint arXiv:1903.12136.", "links": null }, "BIBREF139": { "ref_id": "b139", "title": "BERT Rediscovers the Classical NLP Pipeline", "authors": [ { "first": "Ian", "middle": [], "last": "Tenney", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4593--4601", "other_ids": { "DOI": [ "10.18653/v1/P19-1452" ] }, "num": null, "urls": [], "raw_text": "Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019a. BERT Rediscovers the Classical NLP Pipeline. In Proceedings of the 57th Annual Meeting of the Association for Computa- tional Linguistics, pages 4593-4601. DOI: https://doi.org/10.18653/v1/P19 -1452", "links": null }, "BIBREF140": { "ref_id": "b140", "title": "What do you learn from context? Probing for sentence structure in contextualized word representations", "authors": [ { "first": "Ian", "middle": [], "last": "Tenney", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Berlin", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Poliak", "suffix": "" }, { "first": "R", "middle": [ "Thomas" ], "last": "Mccoy", "suffix": "" }, { "first": "Najoung", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" }, { "first": "Samuel", "middle": [ "R" ], "last": "Bowman", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" } ], "year": 2019, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R. Thomas McCoy, Najoung Kim, Benjamin Van Durme, Samuel R. Bowman, Dipanjan Das, and Ellie Pavlick. 2019b. What do you learn from context? Probing for sentence structure in contextu- alized word representations. 
In International Conference on Learning Representations.", "links": null }, "BIBREF141": { "ref_id": "b141", "title": "WaLDORf: Wasteless Language-model Distillation On Reading-comprehension", "authors": [ { "first": "James", "middle": [], "last": "Yi Tian", "suffix": "" }, { "first": "Alexander", "middle": [ "P" ], "last": "Kreuzer", "suffix": "" }, { "first": "Pai-Hung", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Hans-Martin", "middle": [], "last": "Will", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1912.06638" ] }, "num": null, "urls": [], "raw_text": "James Yi Tian, Alexander P. Kreuzer, Pai-Hung Chen, and Hans-Martin Will. 2019. WaLDORf: Wasteless Language-model Distillation On Reading-comprehension. arXiv preprint arXiv: 1912.06638.", "links": null }, "BIBREF142": { "ref_id": "b142", "title": "A Cross-Task Analysis of Text Span Representations", "authors": [ { "first": "Shubham", "middle": [], "last": "Toshniwal", "suffix": "" }, { "first": "Haoyue", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Bowen", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Lingyu", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Karen", "middle": [], "last": "Livescu", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 5th Workshop on Representation Learning for NLP", "volume": "", "issue": "", "pages": "166--176", "other_ids": { "DOI": [ "10.18653/v1/2020.repl4nlp-1.20" ] }, "num": null, "urls": [], "raw_text": "Shubham Toshniwal, Haoyue Shi, Bowen Shi, Lingyu Gao, Karen Livescu, and Kevin Gimpel. 2020. A Cross-Task Analysis of Text Span Representations. In Proceedings of the 5th Workshop on Representation Learning for NLP, pages 166-176, Online. Associ- ation for Computational Linguistics. DOI: https://doi.org/10.18653/v1/2020 .repl4nlp-1.20", "links": null }, "BIBREF143": { "ref_id": "b143", "title": "Small and Practical BERT Models for Sequence Labeling", "authors": [ { "first": "Henry", "middle": [], "last": "Tsai", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Riesa", "suffix": "" }, { "first": "Melvin", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Naveen", "middle": [], "last": "Arivazhagan", "suffix": "" }, { "first": "Xin", "middle": [], "last": "Li", "suffix": "" }, { "first": "Amelia", "middle": [], "last": "Archer", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/D19-1374" ], "arXiv": [ "arXiv:1909.00100" ] }, "num": null, "urls": [], "raw_text": "Henry Tsai, Jason Riesa, Melvin Johnson, Naveen Arivazhagan, Xin Li, and Amelia Archer. 2019. Small and Practical BERT Models for Sequence Labeling. arXiv preprint arXiv:1909.00100. 
DOI: https://doi.org /10.18653/v1/D19-1374", "links": null }, "BIBREF144": { "ref_id": "b144", "title": "Well-Read Students Learn Better: The Impact of Student Initialization on Knowledge Distillation", "authors": [ { "first": "Iulia", "middle": [], "last": "Turc", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1908.08962" ] }, "num": null, "urls": [], "raw_text": "Iulia Turc, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Well-Read Students Learn Better: The Impact of Student Initialization on Knowledge Distillation. arXiv preprint arXiv:1908.08962.", "links": null }, "BIBREF145": { "ref_id": "b145", "title": "Quantity doesn't buy quality syntax with neural language models", "authors": [ { "first": "Aaron", "middle": [], "last": "Marten Van Schijndel", "suffix": "" }, { "first": "Tal", "middle": [], "last": "Mueller", "suffix": "" }, { "first": "", "middle": [], "last": "Linzen", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "5831--5837", "other_ids": { "DOI": [ "10.18653/v1/D19-1592" ] }, "num": null, "urls": [], "raw_text": "Marten van Schijndel, Aaron Mueller, and Tal Linzen. 2019. Quantity doesn't buy quality syntax with neural language models. In Pro- ceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Nat- ural Language Processing (EMNLP-IJCNLP), pages 5831-5837, Hong Kong, China. Asso- ciation for Computational Linguistics. DOI: https://doi.org/10.18653/v1/D19 -1592", "links": null }, "BIBREF146": { "ref_id": "b146", "title": "Attention is All you Need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is All you Need. In Advances in Neural Information Processing Systems, pages 5998-6008.", "links": null }, "BIBREF147": { "ref_id": "b147", "title": "Visualizing Attention in Transformer-Based Language Representation Models", "authors": [ { "first": "Jesse", "middle": [], "last": "Vig", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1904.02679" ] }, "num": null, "urls": [], "raw_text": "Jesse Vig. 2019. Visualizing Attention in Transformer-Based Language Representation Models. 
arXiv:1904.02679 [cs, stat].", "links": null }, "BIBREF148": { "ref_id": "b148", "title": "Analyzing the Structure of Attention in a Transformer Language Model", "authors": [ { "first": "Jesse", "middle": [], "last": "Vig", "suffix": "" }, { "first": "Yonatan", "middle": [], "last": "Belinkov", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", "volume": "", "issue": "", "pages": "63--76", "other_ids": { "DOI": [ "10.18653/v1/W19-4808" ] }, "num": null, "urls": [], "raw_text": "Jesse Vig and Yonatan Belinkov. 2019. Analyzing the Structure of Attention in a Transformer Language Model. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyz- ing and Interpreting Neural Networks for NLP, pages 63-76, Florence, Italy. Associ- ation for Computational Linguistics. DOI: https://doi.org/10.18653/v1/W19 -4808", "links": null }, "BIBREF149": { "ref_id": "b149", "title": "Parsing as pretraining", "authors": [ { "first": "David", "middle": [], "last": "Vilares", "suffix": "" }, { "first": "Michalina", "middle": [], "last": "Strzyz", "suffix": "" }, { "first": "Anders", "middle": [], "last": "S\u00f8gaard", "suffix": "" }, { "first": "Carlos", "middle": [], "last": "G\u00f3mez-Rodr\u00edguez", "suffix": "" } ], "year": 2020, "venue": "Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1609/aaai.v34i05.6446" ] }, "num": null, "urls": [], "raw_text": "David Vilares, Michalina Strzyz, Anders S\u00f8gaard, and Carlos G\u00f3mez-Rodr\u00edguez. 2020. Parsing as pretraining. In Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20). DOI: https://doi.org/10.1609/aaai.v34i05 .6446", "links": null }, "BIBREF150": { "ref_id": "b150", "title": "The Bottom-up Evolution of Representations in the Transformer: A Study with Machine Translation and Language Modeling Objectives", "authors": [ { "first": "Elena", "middle": [], "last": "Voita", "suffix": "" }, { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Titov", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "4387--4397", "other_ids": { "DOI": [ "10.18653/v1/D19-1448" ] }, "num": null, "urls": [], "raw_text": "Elena Voita, Rico Sennrich, and Ivan Titov. 2019a. The Bottom-up Evolution of Representations in the Transformer: A Study with Machine Translation and Language Modeling Objec- tives. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4387-4397. 
DOI: https://doi.org/10.18653/v1/D19 -1448", "links": null }, "BIBREF151": { "ref_id": "b151", "title": "Analyzing Multi-Head Self-Attention: Specialized Heads Do the Heavy Lifting, the Rest Can Be Pruned", "authors": [ { "first": "Elena", "middle": [], "last": "Voita", "suffix": "" }, { "first": "David", "middle": [], "last": "Talbot", "suffix": "" }, { "first": "Fedor", "middle": [], "last": "Moiseev", "suffix": "" }, { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Titov", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/P19-1580" ], "arXiv": [ "arXiv:1905.09418" ] }, "num": null, "urls": [], "raw_text": "Elena Voita, David Talbot, Fedor Moiseev, Rico Sennrich, and Ivan Titov. 2019b. Analyzing Multi-Head Self-Attention: Specialized Heads Do the Heavy Lifting, the Rest Can Be Pruned. arXiv preprint arXiv:1905.09418. DOI: https://doi.org/10.18653/v1/P19 -1580", "links": null }, "BIBREF152": { "ref_id": "b152", "title": "Information-Theoretic Probing with Minimum Description Length", "authors": [ { "first": "Elena", "middle": [], "last": "Voita", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Titov", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2003.12298" ] }, "num": null, "urls": [], "raw_text": "Elena Voita and Ivan Titov. 2020. Information- Theoretic Probing with Minimum Description Length. arXiv:2003.12298 [cs].", "links": null }, "BIBREF153": { "ref_id": "b153", "title": "Universal Adversarial Triggers for Attacking and Analyzing NLP", "authors": [ { "first": "Eric", "middle": [], "last": "Wallace", "suffix": "" }, { "first": "Shi", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Nikhil", "middle": [], "last": "Kandpal", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Sameer", "middle": [], "last": "Singh", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "2153--2162", "other_ids": { "DOI": [ "10.18653/v1/D19-1221" ] }, "num": null, "urls": [], "raw_text": "Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, and Sameer Singh. 2019a. Uni- versal Adversarial Triggers for Attacking and Analyzing NLP. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2153-2162, Hong Kong, China. Asso- ciation for Computational Linguistics. DOI: https://doi.org/10.18653/v1/D19 -1221", "links": null }, "BIBREF154": { "ref_id": "b154", "title": "Do NLP Models Know Numbers? 
Probing Numeracy in Embeddings", "authors": [ { "first": "Eric", "middle": [], "last": "Wallace", "suffix": "" }, { "first": "Yizhong", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Sujian", "middle": [], "last": "Li", "suffix": "" }, { "first": "Sameer", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/D19-1534" ], "arXiv": [ "arXiv:1909.07940" ] }, "num": null, "urls": [], "raw_text": "Eric Wallace, Yizhong Wang, Sujian Li, Sameer Singh, and Matt Gardner. 2019b. Do NLP Models Know Numbers? Probing Numeracy in Embeddings. arXiv preprint arXiv:1909.07940. DOI: https://doi.org/10.18653/v1/D19-1534", "links": null }, "BIBREF155": { "ref_id": "b155", "title": "GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding", "authors": [ { "first": "Alex", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Amapreet", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Julian", "middle": [], "last": "Michael", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Hill", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Samuel", "middle": [ "R" ], "last": "Bowman", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", "volume": "", "issue": "", "pages": "353--355", "other_ids": { "DOI": [ "10.18653/v1/W18-5446" ] }, "num": null, "urls": [], "raw_text": "Alex Wang, Amapreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2018. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353-355, Brussels, Belgium. Association for Computational Linguistics. DOI: https://doi.org/10.18653/v1/W18-5446", "links": null }, "BIBREF156": { "ref_id": "b156", "title": "K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters", "authors": [ { "first": "Ruize", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Duyu", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Nan", "middle": [], "last": "Duan", "suffix": "" }, { "first": "Zhongyu", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Xuanjing", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Jianshu", "middle": [], "last": "Ji", "suffix": "" }, { "first": "Guihong", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Daxin", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2002.01808" ] }, "num": null, "urls": [], "raw_text": "Ruize Wang, Duyu Tang, Nan Duan, Zhongyu Wei, Xuanjing Huang, Jianshu Ji, Guihong Cao, Daxin Jiang, and Ming Zhou. 2020a. K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters. 
arXiv:2002.01808 [cs].", "links": null }, "BIBREF157": { "ref_id": "b157", "title": "Struct-BERT: Incorporating Language Structures into Pre-Training for Deep Language Understanding", "authors": [ { "first": "Wei", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Bin", "middle": [], "last": "Bi", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Yan", "suffix": "" }, { "first": "Chen", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Zuyi", "middle": [], "last": "Bao", "suffix": "" }, { "first": "Liwei", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Luo", "middle": [], "last": "Si", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1908.04577" ] }, "num": null, "urls": [], "raw_text": "Wei Wang, Bin Bi, Ming Yan, Chen Wu, Zuyi Bao, Liwei Peng, and Luo Si. 2019a. Struct- BERT: Incorporating Language Structures into Pre-Training for Deep Language Understand- ing. arXiv:1908.04577 [cs].", "links": null }, "BIBREF158": { "ref_id": "b158", "title": "MiniLM: Deep Self-Attention Distillation for Task-Agnostic Compression of Pre-Trained Transformers", "authors": [ { "first": "Wenhui", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Furu", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Li", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Hangbo", "middle": [], "last": "Bao", "suffix": "" }, { "first": "Nan", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2002.10957" ] }, "num": null, "urls": [], "raw_text": "Wenhui Wang, Furu Wei, Li Dong, Hangbo Bao, Nan Yang, and Ming Zhou. 2020b. MiniLM: Deep Self-Attention Distillation for Task- Agnostic Compression of Pre-Trained Trans- formers. arXiv preprint arXiv:2002.10957.", "links": null }, "BIBREF159": { "ref_id": "b159", "title": "KEPLER: A Unified Model for Knowledge Embedding and Pre-trained Language Representation", "authors": [ { "first": "Xiaozhi", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Tianyu", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Zhaocheng", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Juanzi", "middle": [], "last": "Li", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Tang", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1911.06136" ] }, "num": null, "urls": [], "raw_text": "Xiaozhi Wang, Tianyu Gao, Zhaocheng Zhu, Zhiyuan Liu, Juanzi Li, and Jian Tang. 2020c. KEPLER: A Unified Model for Knowledge Embedding and Pre-trained Language Repre- sentation. arXiv:1911.06136 [cs].", "links": null }, "BIBREF160": { "ref_id": "b160", "title": "Leyang Cui, and Yue Zhang. 2020d. How Can BERT Help Lexical Semantics Tasks", "authors": [ { "first": "Yile", "middle": [], "last": "Wang", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1911.02929" ] }, "num": null, "urls": [], "raw_text": "Yile Wang, Leyang Cui, and Yue Zhang. 2020d. How Can BERT Help Lexical Semantics Tasks? 
arXiv:1911.02929 [cs].", "links": null }, "BIBREF161": { "ref_id": "b161", "title": "Cross-Lingual Ability of Multilingual BERT: An Empirical Study", "authors": [ { "first": "Zihan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Mayhew", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1912.07840" ] }, "num": null, "urls": [], "raw_text": "Zihan Wang, Stephen Mayhew, Dan Roth, et al. 2019b. Cross-Lingual Ability of Multilingual BERT: An Empirical Study. arXiv preprint arXiv:1912.07840.", "links": null }, "BIBREF162": { "ref_id": "b162", "title": "Can neural networks acquire a structural bias from raw linguistic data?", "authors": [ { "first": "Alex", "middle": [], "last": "Warstadt", "suffix": "" }, { "first": "R", "middle": [], "last": "Samuel", "suffix": "" }, { "first": "", "middle": [], "last": "Bowman", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 42nd Annual Virtual Meeting of the Cognitive Science Society", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Warstadt and Samuel R. Bowman. 2020. Can neural networks acquire a structural bias from raw linguistic data? In Proceedings of the 42nd Annual Virtual Meeting of the Cognitive Science Society. Online.", "links": null }, "BIBREF163": { "ref_id": "b163", "title": "Investigating BERT's Knowledge of Language: Five Analysis Methods with NPIs", "authors": [ { "first": "Alex", "middle": [], "last": "Warstadt", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Ioana", "middle": [], "last": "Grosu", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Hagen", "middle": [], "last": "Blix", "suffix": "" }, { "first": "Yining", "middle": [], "last": "Nie", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Alsop", "suffix": "" }, { "first": "Shikha", "middle": [], "last": "Bordia", "suffix": "" }, { "first": "Haokun", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Alicia", "middle": [], "last": "Parrish", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "2870--2880", "other_ids": { "DOI": [ "10.18653/v1/D19-1286" ] }, "num": null, "urls": [], "raw_text": "Alex Warstadt, Yu Cao, Ioana Grosu, Wei Peng, Hagen Blix, Yining Nie, Anna Alsop, Shikha Bordia, Haokun Liu, Alicia Parrish, et al. 2019. Investigating BERT's Knowledge of Language: Five Analysis Methods with NPIs. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2870-2880. DOI: https://doi.org/10.18653/v1/D19-1286", "links": null }, "BIBREF164": { "ref_id": "b164", "title": "Does BERT Make Any Sense? 
Interpretable Word Sense Disambiguation with Contextualized Embeddings", "authors": [ { "first": "Gregor", "middle": [], "last": "Wiedemann", "suffix": "" }, { "first": "Steffen", "middle": [], "last": "Remus", "suffix": "" }, { "first": "Avi", "middle": [], "last": "Chawla", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Biemann", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1909.10430" ] }, "num": null, "urls": [], "raw_text": "Gregor Wiedemann, Steffen Remus, Avi Chawla, and Chris Biemann. 2019. Does BERT Make Any Sense? Interpretable Word Sense Disambiguation with Contextualized Embeddings. arXiv preprint arXiv:1909.10430.", "links": null }, "BIBREF165": { "ref_id": "b165", "title": "Association for Computational Linguistics", "authors": [ { "first": "Sarah", "middle": [], "last": "Wiegreffe", "suffix": "" }, { "first": "Yuval", "middle": [], "last": "Pinter", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "11--20", "other_ids": { "DOI": [ "10.18653/v1/D19-1002" ] }, "num": null, "urls": [], "raw_text": "Sarah Wiegreffe and Yuval Pinter. 2019. Atten- tion is not not Explanation. In Proceedings of the 2019 Conference on Empirical Meth- ods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 11-20, Hong Kong, China. Associ- ation for Computational Linguistics. DOI: https://doi.org/10.18653/v1/D19-1002", "links": null }, "BIBREF166": { "ref_id": "b166", "title": "R\u00e9mi Louf, Morgan Funtowicz, and Jamie Brew. 2020. HuggingFace's Transformers: State-of-the-Art Natural Language Processing", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Delangue", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Moi", "suffix": "" }, { "first": "Pierric", "middle": [], "last": "Cistac", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rault", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1910.03771" ] }, "num": null, "urls": [], "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtowicz, and Jamie Brew. 2020. HuggingFace's Transformers: State-of-the-Art Natural Language Processing. arXiv:1910. 
03771 [cs].", "links": null }, "BIBREF167": { "ref_id": "b167", "title": "Pay Less Attention with Lightweight and Dynamic Convolutions", "authors": [ { "first": "Felix", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Angela", "middle": [], "last": "Fan", "suffix": "" }, { "first": "Alexei", "middle": [], "last": "Baevski", "suffix": "" }, { "first": "Yann", "middle": [], "last": "Dauphin", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" } ], "year": 2019, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Felix Wu, Angela Fan, Alexei Baevski, Yann Dauphin, and Michael Auli. 2019a. Pay Less Attention with Lightweight and Dynamic Convolutions. In International Conference on Learning Representations.", "links": null }, "BIBREF168": { "ref_id": "b168", "title": "Conditional BERT Contextual Augmentation", "authors": [ { "first": "Xing", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Shangwen", "middle": [], "last": "Lv", "suffix": "" }, { "first": "Liangjun", "middle": [], "last": "Zang", "suffix": "" }, { "first": "Jizhong", "middle": [], "last": "Han", "suffix": "" }, { "first": "Songlin", "middle": [], "last": "Hu", "suffix": "" } ], "year": 2019, "venue": "ICCS 2019: Computational Science ICCS 2019", "volume": "", "issue": "", "pages": "84--95", "other_ids": { "DOI": [ "10.1007/978-3-030-22747-0_7" ] }, "num": null, "urls": [], "raw_text": "Xing Wu, Shangwen Lv, Liangjun Zang, Jizhong Han, and Songlin Hu. 2019b. Conditional BERT Contextual Augmentation. In ICCS 2019: Computational Science ICCS 2019, pages 84-95. Springer. DOI: https://doi.org/10.1007/978-3-030-22747-0_7", "links": null }, "BIBREF169": { "ref_id": "b169", "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation", "authors": [ { "first": "Yonghui", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Schuster", "suffix": "" }, { "first": "Zhifeng", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Quoc", "middle": [ "V" ], "last": "Le", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Norouzi", "suffix": "" }, { "first": "Wolfgang", "middle": [], "last": "Macherey", "suffix": "" }, { "first": "Maxim", "middle": [], "last": "Krikun", "suffix": "" }, { "first": "Yuan", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Qin", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Klaus", "middle": [], "last": "Macherey", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. 
Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation.", "links": null }, "BIBREF170": { "ref_id": "b170", "title": "Perturbed Masking: Parameter-free Probing for Analyzing and Interpreting BERT", "authors": [ { "first": "Zhiyong", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Yun", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Kao", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4166--4176", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhiyong Wu, Yun Chen, Ben Kao, and Qun Liu. 2020. Perturbed Masking: Parameter-free Probing for Analyzing and Interpreting BERT. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4166-4176, Online. Association for Computational Linguistics.", "links": null }, "BIBREF171": { "ref_id": "b171", "title": "Compressing BERT by Progressive Module Replacing", "authors": [ { "first": "Canwen", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Wangchunshu", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Tao", "middle": [], "last": "Ge", "suffix": "" }, { "first": "Furu", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2002.02925" ] }, "num": null, "urls": [], "raw_text": "Canwen Xu, Wangchunshu Zhou, Tao Ge, Furu Wei, and Ming Zhou. 2020. BERT-of-Theseus: Compressing BERT by Progressive Module Replacing. arXiv preprint arXiv:2002.02925.", "links": null }, "BIBREF172": { "ref_id": "b172", "title": "Deepening Hidden Representations from Pre-Trained Language Models for Natural Language Understanding", "authors": [ { "first": "Junjie", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Hai", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1911.01940" ] }, "num": null, "urls": [], "raw_text": "Junjie Yang and Hai Zhao. 2019. Deepening Hidden Representations from Pre-Trained Lan- guage Models for Natural Language Under- standing. arXiv:1911.01940 [cs].", "links": null }, "BIBREF174": { "ref_id": "b174", "title": "XLNet: Generalized Autoregressive Pretraining for Language Understanding", "authors": [ { "first": "", "middle": [], "last": "Le", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1906.08237" ] }, "num": null, "urls": [], "raw_text": "Le. 2019. XLNet: Generalized Autoregressive Pretraining for Language Understanding. 
arXiv:1906.08237 [cs].", "links": null }, "BIBREF175": { "ref_id": "b175", "title": "TaBERT: Pretraining for Joint Understanding of Textual and Tabular Data", "authors": [ { "first": "Pengcheng", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Yih", "middle": [], "last": "Wen-Tau", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "8413--8426", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pengcheng Yin, Graham Neubig, Wen-tau Yih, and Sebastian Riedel. 2020. TaBERT: Pretrain- ing for Joint Understanding of Textual and Tabular Data. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8413-8426, Online. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF177": { "ref_id": "b177", "title": "Learning and Evaluating General Linguistic Intelligence", "authors": [ { "first": "Wang", "middle": [], "last": "Lazaridou", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Ling", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Phil", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "", "middle": [], "last": "Blunsom", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1901.11373" ] }, "num": null, "urls": [], "raw_text": "Lazaridou, Wang Ling, Lei Yu, Chris Dyer, and Phil Blunsom. 2019. Learning and Evaluat- ing General Linguistic Intelligence. arXiv: 1901.11373 [cs, stat].", "links": null }, "BIBREF178": { "ref_id": "b178", "title": "Large Batch Optimization for Deep Learning: Training BERT in 76 Minutes", "authors": [ { "first": "Yang", "middle": [], "last": "You", "suffix": "" }, { "first": "Jing", "middle": [], "last": "Li", "suffix": "" }, { "first": "Sashank", "middle": [], "last": "Reddi", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Hseu", "suffix": "" }, { "first": "Sanjiv", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Srinadh", "middle": [], "last": "Bhojanapalli", "suffix": "" }, { "first": "Xiaodan", "middle": [], "last": "Song", "suffix": "" }, { "first": "James", "middle": [], "last": "Demmel", "suffix": "" }, { "first": "Cho-Jui", "middle": [], "last": "Hsieh", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1904.00962" ] }, "num": null, "urls": [], "raw_text": "Yang You, Jing Li, Sashank Reddi, Jonathan Hseu, Sanjiv Kumar, Srinadh Bhojanapalli, Xiaodan Song, James Demmel, and Cho-Jui Hsieh. 2019. Large Batch Optimization for Deep Learning: Training BERT in 76 Minutes. arXiv preprint arXiv:1904.00962, 1(5).", "links": null }, "BIBREF179": { "ref_id": "b179", "title": "GOBO: Quantizing Attention-Based NLP Models for Low Latency and Energy Efficient Inference", "authors": [ { "first": "Ali", "middle": [], "last": "Hadi Zadeh", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "Moshovos", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1109/MICRO50266.2020.00071" ], "arXiv": [ "arXiv:2005.03842" ] }, "num": null, "urls": [], "raw_text": "Ali Hadi Zadeh and Andreas Moshovos. 2020. 
GOBO: Quantizing Attention-Based NLP Models for Low Latency and Energy Efficient Inference. arXiv:2005.03842 [cs, stat]. DOI: https://doi.org/10.1109/MICRO50266 .2020.00071", "links": null }, "BIBREF180": { "ref_id": "b180", "title": "Q8BERT: Quantized 8bit BERT", "authors": [ { "first": "Ofir", "middle": [], "last": "Zafrir", "suffix": "" }, { "first": "Guy", "middle": [], "last": "Boudoukh", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Izsak", "suffix": "" }, { "first": "Moshe", "middle": [], "last": "Wasserblat", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1910.06188" ] }, "num": null, "urls": [], "raw_text": "Ofir Zafrir, Guy Boudoukh, Peter Izsak, and Moshe Wasserblat. 2019. Q8BERT: Quantized 8bit BERT. arXiv preprint arXiv:1910.06188.", "links": null }, "BIBREF181": { "ref_id": "b181", "title": "HellaSwag: Can a Machine Really Finish Your Sentence?", "authors": [ { "first": "Rowan", "middle": [], "last": "Zellers", "suffix": "" }, { "first": "Ari", "middle": [], "last": "Holtzman", "suffix": "" }, { "first": "Yonatan", "middle": [], "last": "Bisk", "suffix": "" }, { "first": "Ali", "middle": [], "last": "Farhadi", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4791--4800", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. HellaSwag: Can a Machine Really Finish Your Sentence? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4791-4800.", "links": null }, "BIBREF182": { "ref_id": "b182", "title": "ERNIE: Enhanced Language Representation with Informative Entities", "authors": [ { "first": "Zhengyan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Xu", "middle": [], "last": "Han", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Xin", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1441--1451", "other_ids": { "DOI": [ "10.18653/v1/P19-1139" ] }, "num": null, "urls": [], "raw_text": "Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, and Qun Liu. 2019. ERNIE: Enhanced Language Representa- tion with Informative Entities. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1441-1451, Florence, Italy. Association for Computa- tional Linguistics. 
DOI: https://doi.org/10.18653/v1/P19-1139", "links": null }, "BIBREF183": { "ref_id": "b183", "title": "Semantics-aware BERT for Language Understanding", "authors": [ { "first": "Zhuosheng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yuwei", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Hai", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Zuchao", "middle": [], "last": "Li", "suffix": "" }, { "first": "Shuailiang", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Xi", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Xiang", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2020, "venue": "", "volume": "2020", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhuosheng Zhang, Yuwei Wu, Hai Zhao, Zuchao Li, Shuailiang Zhang, Xi Zhou, and Xiang Zhou. 2020. Semantics-aware BERT for Language Understanding. In AAAI 2020.", "links": null }, "BIBREF184": { "ref_id": "b184", "title": "Extreme Language Model Compression with Optimal Subwords and Shared Projections", "authors": [ { "first": "Sanqiang", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Raghav", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Song", "suffix": "" }, { "first": "Denny", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1909.11687" ] }, "num": null, "urls": [], "raw_text": "Sanqiang Zhao, Raghav Gupta, Yang Song, and Denny Zhou. 2019. Extreme Language Model Compression with Optimal Subwords and Shared Projections. arXiv preprint arXiv:1909.11687.", "links": null }, "BIBREF185": { "ref_id": "b185", "title": "How does BERT's attention change when you finetune? An analysis methodology and a case study in negation scope", "authors": [ { "first": "Yiyun", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Bethard", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.429" ] }, "num": null, "urls": [], "raw_text": "Yiyun Zhao and Steven Bethard. 2020. How does BERT's attention change when you fine-tune? An analysis methodology and a case study in negation scope. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4729-4747, Online. Association for Computational Linguistics. DOI: https://doi.org/10.18653/v1/2020.acl-main.429, PMCID: PMC7660194", "links": null }, "BIBREF186": { "ref_id": "b186", "title": "Improving BERT Fine-tuning with Embedding Normalization", "authors": [ { "first": "Wenxuan", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Junyi", "middle": [], "last": "Du", "suffix": "" }, { "first": "Xiang", "middle": [], "last": "Ren", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1911.03918" ] }, "num": null, "urls": [], "raw_text": "Wenxuan Zhou, Junyi Du, and Xiang Ren. 2019. Improving BERT Fine-tuning with Embedding Normalization. arXiv preprint
arXiv:1911.03918.", "links": null }, "BIBREF187": { "ref_id": "b187", "title": "Evaluating Commonsense in Pre-Trained Language Models", "authors": [ { "first": "Xuhui", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Leyang", "middle": [], "last": "Cui", "suffix": "" }, { "first": "Dandan", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2020, "venue": "AAAI 2020", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1609/aaai.v34i05.6523" ] }, "num": null, "urls": [], "raw_text": "Xuhui Zhou, Yue Zhang, Leyang Cui, and Dandan Huang. 2020. Evaluating Commonsense in Pre-Trained Language Models. In AAAI 2020. DOI: https://doi.org/10.1609/aaai.v34i05.6523", "links": null }, "BIBREF188": { "ref_id": "b188", "title": "FreeLB: Enhanced Adversarial Training for Language Understanding", "authors": [ { "first": "Chen", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Zhe", "middle": [], "last": "Gan", "suffix": "" }, { "first": "Siqi", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Goldstein", "suffix": "" }, { "first": "Jingjing", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1909.11764" ] }, "num": null, "urls": [], "raw_text": "Chen Zhu, Yu Cheng, Zhe Gan, Siqi Sun, Tom Goldstein, and Jingjing Liu. 2019. FreeLB: Enhanced Adversarial Training for Language Understanding. arXiv:1909.11764 [cs].", "links": null } }, "ref_entries": { "FIGREF0": { "text": "BERT world knowledge", "num": null, "uris": null, "type_str": "figure" }, "FIGREF1": { "text": "Attention patterns in BERT (Kovaleva et al., 2019).", "num": null, "uris": null, "type_str": "figure" }, "FIGREF2": { "text": "BERT layer transferability (columns correspond to probing tasks).", "num": null, "uris": null, "type_str": "figure" }, "FIGREF3": { "text": "(2019a); Zhang et al. (2019) include entity embeddings as input for training BERT, while Poerner et al. (2019) adapt entity vectors to BERT representations. As mentioned above, Wang et al. (2020c) integrate knowledge not through entity", "num": null, "uris": null, "type_str": "figure" }, "FIGREF4": { "text": "Pre-trained weights help BERT find wider optima in fine-tuning on MRPC (right) than training from scratch (left) (Hao et al., 2019).", "num": null, "uris": null, "type_str": "figure" }, "FIGREF5": { "text": "[SEP] rather than on linguistically interpretable patterns. It is understandable why fine-tuning would increase the attention to [CLS], but not [SEP]. If Clark et al. (2019) are correct that [SEP] serves as a ''no-op'' indicator, fine-tuning basically tells BERT what to ignore.", "num": null, "uris": null, "type_str": "figure" }, "TABREF2": { "num": null, "content": "", "type_str": "table", "text": "Comparison of BERT compression studies. Compression, performance retention, and inference time speedup figures are given with respect to BERT base, unless indicated otherwise. Performance retention is measured as a ratio of average scores achieved by a given model and by BERT base. The subscript in the model description reflects the number of layers used.", "html": null } } } }