{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:08:40.083979Z" }, "title": "BERTering RAMS: What and How Much does BERT Already Know About Event Arguments? -A Study on the RAMS Dataset", "authors": [ { "first": "Varun", "middle": [], "last": "Gangal", "suffix": "", "affiliation": { "laboratory": "", "institution": "Language Technologies Institute Carnegie Mellon University", "location": {} }, "email": "vgangal@cs.cmu.edu" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "", "affiliation": { "laboratory": "", "institution": "Language Technologies Institute Carnegie Mellon University", "location": {} }, "email": "hovy@cs.cmu.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Using the attention map based probing framework from (Clark et al., 2019), we observe that, on the RAMS dataset (Ebner et al., 2020) 1 , BERT's attention heads 2 have modest but well above-chance ability to spot event arguments sans any training or domain finetuning, varying from a low of 17.77% for Place to a high of 51.61% for Artifact. Next, we find that linear combinations of these heads, estimated with \u224811% of available total event argument detection supervision, can push performance wellhigher for some roles-highest two being Victim (68.29% Accuracy) and Artifact (58.82% Accuracy). Furthermore, we investigate how well our methods do for cross-sentence event arguments. We propose a procedure to isolate \"best heads\" for cross-sentence argument detection separately of those for intra-sentence arguments. The heads thus estimated have superior cross-sentence performance compared to their jointly estimated equivalents, albeit only under the unrealistic assumption that we already know the argument is present in another sentence. Lastly, we seek to isolate to what extent our numbers stem from lexical frequency based associations between gold arguments and roles. We propose NONCE, a scheme to create adversarial test examples by replacing gold arguments with randomly generated \"nonce\" words. We find that learnt linear combinations are robust to NONCE, though individual best heads can be more sensitive.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Using the attention map based probing framework from (Clark et al., 2019), we observe that, on the RAMS dataset (Ebner et al., 2020) 1 , BERT's attention heads 2 have modest but well above-chance ability to spot event arguments sans any training or domain finetuning, varying from a low of 17.77% for Place to a high of 51.61% for Artifact. Next, we find that linear combinations of these heads, estimated with \u224811% of available total event argument detection supervision, can push performance wellhigher for some roles-highest two being Victim (68.29% Accuracy) and Artifact (58.82% Accuracy). Furthermore, we investigate how well our methods do for cross-sentence event arguments. We propose a procedure to isolate \"best heads\" for cross-sentence argument detection separately of those for intra-sentence arguments. The heads thus estimated have superior cross-sentence performance compared to their jointly estimated equivalents, albeit only under the unrealistic assumption that we already know the argument is present in another sentence. Lastly, we seek to isolate to what extent our numbers stem from lexical frequency based associations between gold arguments and roles. 
We propose NONCE, a scheme to create adversarial test examples by replacing gold arguments with randomly generated \"nonce\" words. We find that learnt linear combinations are robust to NONCE, though individual best heads can be more sensitive.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The NLP representation paradigm has undergone a drastic change in this decade -moving from lin- 1 Refer to Figure 1 of that paper for an example illustrating four role names. Since these role names are human readable and intuitively named, we refer to them without elaboration. 2 We use map to refer to the per-example word-word activations at a particular layer-head, while head refers either to the identity of the particular layer-head. We ground these terms more clearly in \u00a72.1 guistic/task motivated 0-1 feature families to perword-type pretrained vectors (Pennington et al., 2014) to contextual embeddings (Peters et al., 2018) .", "cite_spans": [ { "start": 96, "end": 97, "text": "1", "ref_id": null }, { "start": 278, "end": 279, "text": "2", "ref_id": null }, { "start": 562, "end": 587, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF17" }, { "start": 613, "end": 634, "text": "(Peters et al., 2018)", "ref_id": "BIBREF18" } ], "ref_spans": [ { "start": 107, "end": 115, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Contextual embeddings (CEs) produce incontext representations for each token -the representation framework being a large, pretrained encoder with per-token outputs. The typical procedure to use CEs for a downstream task is to add one or more task layers atop each token, or for a designated token per-sentence, depending on the nature of the task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The task layers (and optionally, the representation) are then \"finetuned\" using a task specific loss, albeit with a slower training rate than would be used for from-scratch training. ELMo (Peters et al., 2018) was an early CE. The three-fold recipe of a transformer based architecture, masked language modelling objective and large pre-training corpora, starting with BERT (Devlin et al., 2018) led to CEs which were vastly effective for most tasks.", "cite_spans": [ { "start": 188, "end": 209, "text": "(Peters et al., 2018)", "ref_id": "BIBREF18" }, { "start": 373, "end": 394, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The strong performance of contextual representations with just shallow task layers and minimal finetuning drove the urge to understand what and how much these models already knew about aspects of syntax and semantics. The study of methods and analysis to do this has come to be called probing. Besides \"explaining\" CE featurization, probing can aid in finding lacunae to be addressed by future representations. Linzen et al. (2016) , one of the early works on probing, evaluated whether language models could predict the correct verb form agreeing with the noun. Marvin and Linzen (2018) generalized this approach beyond single-word gaps with a larger suite of \"minimal pairs\". They also control for lexical confounding and expand the probing to new aspects such as reflexive anaphora and NPIs. Gulordava et al. 
(2018) evaluate subject-verb agreement but only through \"nonce\" sentences to con-trol for both lexical confounding and memorization 3 . Lakretz et al. (2019) isolate units of LSTM language models whose activations closely track verb-noun number agreement, particularly for hard, long-distance cases. Clark et al. (2019) , whose probing methods we adopt, examine if BERT attention heads capture dependency structure.", "cite_spans": [ { "start": 411, "end": 431, "text": "Linzen et al. (2016)", "ref_id": "BIBREF14" }, { "start": 563, "end": 587, "text": "Marvin and Linzen (2018)", "ref_id": "BIBREF16" }, { "start": 795, "end": 818, "text": "Gulordava et al. (2018)", "ref_id": "BIBREF8" }, { "start": 948, "end": 969, "text": "Lakretz et al. (2019)", "ref_id": "BIBREF13" }, { "start": 1112, "end": 1131, "text": "Clark et al. (2019)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this work, we probe what and how much a pretrained BERT representation already knows about event roles and their arguments. Understanding how well event arguments are represented can be a first foray into understanding other aspects about events. Extraction of event arguments is often a prerequisite for more complex event tasks. Some examples are event coreference (Lu and Ng, 2018) , detecting event-event temporal (Vashishtha et al., 2019) and causal relations (Dunietz et al., 2017) , sub-event structure (Araki et al., 2014) and generating approximate causal paths (Kang et al., 2017) . Tuples of event-type and arguments are one way of inducing script like-structures (Chambers and Jurafsky, 2008) . In summary, our work makes the following contributions:", "cite_spans": [ { "start": 370, "end": 387, "text": "(Lu and Ng, 2018)", "ref_id": "BIBREF15" }, { "start": 421, "end": 446, "text": "(Vashishtha et al., 2019)", "ref_id": "BIBREF23" }, { "start": 468, "end": 490, "text": "(Dunietz et al., 2017)", "ref_id": "BIBREF6" }, { "start": 513, "end": 533, "text": "(Araki et al., 2014)", "ref_id": "BIBREF1" }, { "start": 574, "end": 593, "text": "(Kang et al., 2017)", "ref_id": "BIBREF11" }, { "start": 678, "end": 707, "text": "(Chambers and Jurafsky, 2008)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1. We show that there always exists a BERT attention head (BESTHEAD) with above-chance ability to detect arguments, for a given event role. We also show that this ability is even stronger through learnt linear combinations (LINEAR) of heads. 2. We notice a relative weakness at detecting cross sentence arguments ( \u00a73.3). Motivated by this, we devise a procedure to isolate only the cross-sentence argument detection ability of heads w.r.t a role ( \u00a73.3.1). Our procedure considerably improves cross-sentence performance for some roles ( \u00a73.4), especially for INSTRUMENT and PLACE. 3. Lastly, we seek to isolate how much of the zero-shot argument detection ability originates solely from the model's world knowledge and lexical frequency based associations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To do this, we propose NONCE, a method to perturb test examples to dampen such associations ( \u00a72.5.3). 
We find that the LINEAR approach is robust to NONCE perturbation, while BESTHEAD is more sensitive.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" },
{ "text": "3 A motivation for our ablation in \u00a72.5.3", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" },
{ "text": "2.1 Background", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "2" },
{ "text": "The Transformer architecture (Vaswani et al., 2017) consists of |L| layers, each comprised of |H| > 1 \"self-attention\" heads. Here, we describe the architecture just enough to ground terminology -we defer to the original work for detailed exposition. In a given layer l 4 , a single self-attention head h consists of three steps. First, query, key and value projections q^h_i = Q_h^T e_i , k^h_i = K_h^T e_i , v^h_i = V_h^T e_i are computed from the previous layer's token embedding e_i .", "cite_spans": [ { "start": 29, "end": 51, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Transformers", "sec_num": "2.1.1" },
{ "text": "Then, softmax-normalized dot products \u03b1^h_{ij} = \\frac{\\exp((q^h_i)^T k^h_j)}{\\sum_m \\exp((q^h_i)^T k^h_m)} are computed between the current token's query projection and the other tokens' key projections. These dot products, a.k.a. attention values, are then used as weights to combine all token value projections, o^h_i = \\sum_j \u03b1^h_{ij} v^h_j , which gives the current head's output for token i.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transformers", "sec_num": "2.1.1" },
{ "text": "Finally, the outputs from all heads are concatenated and projected to get the per-token embeddings for the current layer: o_i = W^T Concat({o^0_i , o^1_i , . . . o^{|H|\u22121}_i }). Henceforth, we refer to the parameter tuple {Q_{h,l} , K_{h,l} , V_{h,l} } , uniquely identified by h \u2208 {0, 1, . . . |H| \u2212 1}, l \u2208 {0, 1, . . . |L| \u2212 1}, as the \"attention head\" or simply \"head\", while the values \u03b1^{h,l}_{ij} are collectively referred to as the \"attention map\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transformers", "sec_num": "2.1.1" },
{ "text": "BERT uses a Transformer architecture with 12 heads and 12 layers 5 . It comes with an associated BPE tokenizer (Sennrich et al., 2015) , which is used in pretraining as described next.", "cite_spans": [ { "start": 111, "end": 134, "text": "(Sennrich et al., 2015)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "BERT", "sec_num": "2.1.2" },
{ "text": "BERT follows a two-stage pretraining process. In the first stage, also known as masked language modelling (MLM), randomly selected token positions are replaced with [MASK] . The task is to predict the true identities of words at these positions, given the sequence. 
This stage uses single sentences as training examples. In the second stage, also known as next sentence prediction (NSP), the Figure 1 : In this example, the head chosen by BESTHEAD for the PLACE role, correctly picks out the argument \"Syria\" for the trigger \"capitulation\". Attention probabilities are shown as blue lines from trigger token to other tokens, with boldness indicating magnitude. It manages to evade distractor pronouns (there) and other geographical entity names (Russia and United States). The text above flows in right to left direction. The full text reads: \"Chances of intentional conflict are real as is the possibility of an unintended clash escalating . At the same time, Syria is not essential to the national security of Russia or the United States. It is not without importance but a defeat or capitulation there will not change the balance of power between them at all . . . \" model is given a pair of sentences (separated by [SEP]), with the task being to predict whether these were truly consecutive or not.", "cite_spans": [ { "start": 165, "end": 171, "text": "[MASK]", "ref_id": null } ], "ref_spans": [ { "start": 392, "end": 400, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "BERT", "sec_num": "2.1.2" }, { "text": "Unless otherwise mentioned, we use the bertbase-uncased model. We use the implementation of BERT from HuggingFace. 6 (Wolf et al., 2019)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "BERT", "sec_num": "2.1.2" }, { "text": "We use the recently released RAMS dataset (Ebner et al., 2020) for all our experiments. The reasons for using this particular dataset for our analysis are", "cite_spans": [ { "start": 42, "end": 62, "text": "(Ebner et al., 2020)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "2.2" }, { "text": "\u2022 It has a wide mix of reasonably frequent roles (represented well across splits) from different kinds of frames . Discussion on non-frequent roles can be found in \u00a73.6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "2.2" }, { "text": "\u2022 For many roles, it has examples with the gold arguments being in a different sentence from the event trigger. This makes it easy to probe for intra-sentence and cross-sentence argument extraction in the same set of experiments. Analysis of cross-sentence performance can be found in \u00a73.3 and \u00a73.4", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "2.2" }, { "text": "We note that the dataset is in English (Bender and Friedman, 2018) and observations made may not generalize to other languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "2.2" }, { "text": "For example x, we refer to the event, role, gold argument and document as e, r, a and D. D is an 6 github.com/huggingface/transformers/ ordered sequence of tokens {w 0 , w 1 . . . w |D|\u22121 }. i e denotes the event trigger index 7 . We use the layer index l and head indices 0 to |H| \u2212 1 to index the respective head's attention distribution from token i to all other tokens j \u2208 D", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Setup", "sec_num": "2.2.1" }, { "text": "at index i e . 
P^*_{l,h}(j|i_e) = \u03b1^{l,h}_{i_e j} , 0 \u2264 l < |L|, 0 \u2264 h < |H| (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Setup", "sec_num": "2.2.1" },
{ "text": "Note, however, that there exists a complementary set of attention values from each token j to the token i e . To use a unified indexing scheme to refer to these values, we use negative indices from \u22121 to \u2212|H| as their head indices. Since these values come from attention-head activations of different positions, they need to be renormalized to use them as probabilities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Setup", "sec_num": "2.2.1" },
{ "text": "P^*_{l,\u2212h}(j|i_e) = \\frac{\u03b1^{l,h\u22121}_{j i_e}}{\\sum_{k \u2208 D} \u03b1^{l,h\u22121}_{k i_e}} , 0 < h \u2264 |H| (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Setup", "sec_num": "2.2.1" },
{ "text": "Our above framework assumed that the attention maps are between whole-word tokens. However, BERT represents a sentence as a sequence of BPE subwords at every level, including for the attention maps.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Words and Subwords", "sec_num": "2.2.2" },
{ "text": "We use the quite intuitive approach described in Section 4.1 of (Clark et al., 2019) -incoming attention values to the constituent subwords of a word are added to get the attention to that word. Outgoing attention values from constituent subwords are averaged to get the outgoing attention value from the word.", "cite_spans": [ { "start": 64, "end": 84, "text": "(Clark et al., 2019)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Words and Subwords", "sec_num": "2.2.2" },
{ "text": "Note that the above operations precede the probability computations in Equations 1 and 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Words and Subwords", "sec_num": "2.2.2" },
{ "text": "We follow the practice of earlier probing works such as (Sorodoc et al., 2020) and (Linzen et al., 2016) of using one of the smaller splits for training. Specifically, we use the original dev split of RAMS (924 examples in total) as our training split. Note that each example could contain multiple role-argument pairs. ", "cite_spans": [ { "start": 56, "end": 78, "text": "(Sorodoc et al., 2020)", "ref_id": "BIBREF22" }, { "start": 83, "end": 104, "text": "(Linzen et al., 2016)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Dataset Splits", "sec_num": "2.2.3" },
{ "text": "For a given event e and role r, we define a predicted argument token index \\hat{a} to be accurate if it corresponds to any of the tokens in the gold argument span [a^{beg}_{r,e} , a^{end}_{r,e}]. This is described formally in Equation 3. I stands for the 0-1 indicator function.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Measure", "sec_num": "2.3" },
{ "text": "Acc_{e,r,a}(\\hat{a}) = I(a^{beg}_{r,e} \u2264 \\hat{a} < a^{end}_{r,e}) (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Measure", "sec_num": "2.3" },
{ "text": "Typical measures of argument extraction differ from the one we use, being span-based. Given the limitations of our probing approaches, we lack a clear mechanism for predicting multi-word spans, and can only predict likely single tokens for the argument, which led us to choose this measure 8 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Measure", "sec_num": "2.3" },
{ "text": "Let X = {e_m , r_m , a_m }_{m=1}^{m=M} be the training set. X_r is the subset of training examples with r_m = r. 
For each role r, BESTHEAD selects the head {l, h}_{best}(r) with the best aggregate accuracy on X_r . Other than one pass over the training set for comparing aggregate accuracies of heads for each role, there is no learning required for this method. At test-time, based on the test role, the respective best head is used to predict the argument token.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "BESTHEAD", "sec_num": "2.4.1" },
{ "text": "Acc^{X_r}_{l,h} = \\sum_{m=1}^{M_r} Acc_{e_m ,r,a_m}(arg max_j P_{l,h}(j|i_{e_m})) , {l, h}_{best}(r) = arg max_{l,h} Acc^{X_r}_{l,h}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "BESTHEAD", "sec_num": "2.4.1" },
{ "text": "The LINEAR model learns a weighted linear combination of all |L| \u00d7 |H| \u00d7 2 head distributions (twice for the \"from\" and \"to\" heads).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LINEAR", "sec_num": "2.4.2" },
{ "text": "\u03c6(j|i) = \\sum_{l=0}^{|L|\u22121} \\sum_{h=\u2212|H|}^{|H|\u22121} w_{l,h} P_{l,h}(j|i) + B , \\hat{P}(j|i) = \\frac{\u03c6(j|i)}{\\sum_{k=0}^{|D|\u22121} \u03c6(k|i)}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LINEAR", "sec_num": "2.4.2" },
{ "text": "Note that gradients are not backpropagated into BERT -only the linear layer parameters w l,h , B are updated during backpropagation. This formulation is the same as the one in (Clark et al., 2019) . For our loss function, we use the KL divergence KL(\\hat{P}||P) between the predicted distribution over document tokens \\hat{P} and the gold distribution over document tokens P. For the gold distribution over argument tokens, the probability mass is distributed equally over the tokens in the argument span, with zero mass on the other tokens.", "cite_spans": [ { "start": 176, "end": 196, "text": "(Clark et al., 2019)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "LINEAR", "sec_num": "2.4.2" },
{ "text": "KL(\\hat{P}||P) = \\sum_{k=0}^{|D|\u22121} \\hat{P}(k|i) \\log \\frac{\\hat{P}(k|i)}{P(k|i)}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LINEAR", "sec_num": "2.4.2" },
{ "text": "2.5 Baselines", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "2.5" },
{ "text": "RAND is the expected accuracy of following the strategy of randomly picking any token i from the document D as the argument (other than the trigger word i_e itself). For a given role r with a gold argument a_{r,e} of length |a_{r,e}|, this equals \\frac{|a_{r,e}|}{|D|\u22121} .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RAND", "sec_num": "2.5.1" },
{ "text": "SENTONLY is the expected accuracy of following the strategy of randomly picking any token from the same sentence S_e as the argument, save the event trigger itself. This is motivated by the intuition that event arguments mostly lie in-sentence. This equals \\frac{|a_{r,e}|}{|S_e|\u22121} .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SENTONLY", "sec_num": "2.5.2" },
{ "text": "We wish to isolate how much of the heads performance is due to memorized \"world knowledge\" and typical lexical associations e.g Russia would typically always be a PLACE or TARGET. 
Recent works have shown that BERT does retain such associations, including for first names (Shwartz et al., 2020) , and enough so that it can act as a reasonable knowledge base (Petroni et al., 2019) .", "cite_spans": [ { "start": 271, "end": 293, "text": "(Shwartz et al., 2020)", "ref_id": "BIBREF21" }, { "start": 357, "end": 379, "text": "(Petroni et al., 2019)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "NONCE procedure", "sec_num": "2.5.3" }, { "text": "One way of implementing this is to create perturbed test examples where gold arguments are replaced with synthetically created \"nonce\" words not necessarily related to the context. This is similar to the approach of (Gulordava et al., 2018 ).", "cite_spans": [ { "start": 216, "end": 239, "text": "(Gulordava et al., 2018", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "NONCE procedure", "sec_num": "2.5.3" }, { "text": "\u2022 Each gold argument token is replaced by a randomly generated token with the same number of characters as the original string.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NONCE procedure", "sec_num": "2.5.3" }, { "text": "\u2022 Stop words such as determiners, pronouns, and conjunctions are left unaltered, though they might be a part of the argument span.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NONCE procedure", "sec_num": "2.5.3" }, { "text": "\u2022 We also ensure that the shape of the original argument, i.e the profile of case, digit vs letter is maintained 9 . e.g Russia-15 can be randomly replaced by Vanjia-24, which has the same shape Xxxx-dd.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NONCE procedure", "sec_num": "2.5.3" }, { "text": "\u2022 Note that we do not take pronounceability of the nonce word into account. Though this could arguably be a relevant invariant to maintain, we were not sure of an apt way to enforce it automatically.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NONCE procedure", "sec_num": "2.5.3" }, { "text": "\u2022 We also note that BERT may end up using a likely larger number of subword tokens to replace the nonce words than it would use for the gold argument token. Since these are essentially randomly composed tokens, they can contain subwords which are rarely seen in vocabulary tokens.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NONCE procedure", "sec_num": "2.5.3" }, { "text": "We refer to this procedure as NONCE, and overloading the term, the test set so created as the NONCE test set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NONCE procedure", "sec_num": "2.5.3" }, { "text": "In Table 2 , we record the accuracies and layer positions of best heads for the 15 most frequent roles.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 2", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Spotting the Best Head", "sec_num": "3.1" }, { "text": "1. BESTHEAD always has higher accuracy than the RAND and SENT baselines. 2. 5 of the 15 roles can be identified with 40%+ accuracy -the highest being COMMUNICA-TOR , at 51.61%. 3. The best head for arguments which are not together present in frames is often the same. For instance, Layer 8, Head 10 is the best head for TRANSPORTER, ATTACKER, COMMUNICA-TOR and BENEFICIARY. 4. Most best heads are located in the higher layers, specifically the 7th, 8th or 9th layers. 
Exceptions are the best heads for the DESTINATION and ARTIFACT roles, located in the 0th and 4th layers respectively. 5. Place roles are the hardest to identify, with an accuracy of 17.77%. 6. Layer 8, Head 10 seems to be doing a lot of the heavy lifting. For 7 out of 15 roles, this is the best head. This shows that it is quite \"overworked\" in terms of the number of roles it tracks. Furthermore, though some of these role pairs are from different frames (e.g., see Point 3 above), some aren't, e.g., GIVER and BENEFICIARY. In such cases, at least one of the two arguments predicted for these two roles is sure to be inaccurate -e.g., the head would point to either the GIVER or the BENEFICIARY, but not both. 10 7. Most of the best heads for roles are \"from\" heads rather than \"to\" heads, apart from those for ORIGIN, ATTACKER and ARTIFACT. Table 3 shows test accuracies for both LINEAR and BESTHEAD approaches, and also the baselines. For 12 of the 15 roles, LINEAR has higher accuracy than BESTHEAD. There are three exceptions, among them ORIGIN and INSTRUMENT, which suffer a decline. While DESTINATION and PLACE do see increases in LINEAR compared to BESTHEAD, it could be the case that none of the individual heads are particularly good at capturing cross-sentence arguments for the other three roles, while the best head is already good enough to capture the intra-sentence case. This would make LINEAR no richer as a hypothesis space than BESTHEAD, causing the similar or slightly worse accuracy. In \u00a73.3 we dig deeper into the aspect of cross-sentence performance.", "cite_spans": [], "ref_spans": [ { "start": 1304, "end": 1311, "text": "Table 3", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Spotting the Best Head", "sec_num": "3.1" },
{ "text": "From Table 4 , we observe that both BESTHEAD and LINEAR performance degrades in the cross-sentence case, i.e., when \"trigger sentence\" and \"gold argument sentence\" differ. Three potential reasons:", "cite_spans": [], "ref_spans": [ { "start": 5, "end": 12, "text": "Table 4", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Cross-Sentence Performance", "sec_num": "3.3" },
{ "text": "1. There are too few instances of cross-sentence event arguments in the small supervised set we use. Furthermore, even if there is a sufficient number of cross-sentence event arguments, these form a much smaller proportion of the total instances in comparison to the intra-sentence instances. 2. It is difficult for a single attention head to have a higher value for outside-sentence tokens compared to in-sentence ones. 3. Different heads might be best for intra- and cross-sentence performance, and finding one best head for both could be sub-optimal.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-Sentence Performance", "sec_num": "3.3" },
{ "text": "Motivated by the above reasons, we devise a procedure which we refer to as cross-sentence occlusion (CSO). Since Reason 1 is a property of the data distribution, we attempt to alleviate Reasons 2 and 3. To address Reason 3, we try to learn a different head (combination) for the cross-sentence case. To address Reason 2, while finding the best cross-sentence head, we zero-mask out the attention values corresponding to in-sentence 11 tokens and re-normalize the probability distribution. 
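To make the occlusion step concrete, the sketch below shows the zero-mask-and-renormalize operation on a single head's attention distribution from the trigger token. This is a minimal illustration under our own assumptions rather than the paper's released code: the function name cross_sentence_occlude is hypothetical, NumPy is used only for brevity, and the attention vector is assumed to already be aggregated to the word level as in \u00a72.2.2.

```python
import numpy as np

def cross_sentence_occlude(attn, sentence_ids, trigger_idx):
    """Zero out in-sentence attention mass and renormalize.

    attn: 1-D array of attention probabilities from the trigger token
        to every document token (one head, word-level).
    sentence_ids: 1-D array with the sentence index of each document token.
    trigger_idx: position of the event trigger in the document.
    Returns a distribution supported only on tokens outside the
    trigger's sentence (hypothetical helper, not the authors' code).
    """
    attn = np.asarray(attn, dtype=float).copy()
    same_sentence = sentence_ids == sentence_ids[trigger_idx]
    attn[same_sentence] = 0.0      # occlude in-sentence tokens (incl. the trigger)
    total = attn.sum()
    if total == 0.0:               # all mass was in-sentence; nothing left to rank
        return attn
    return attn / total            # re-normalize over cross-sentence tokens

# Toy example: 6-token document, trigger at index 1, sentence break after index 2.
attn = np.array([0.05, 0.0, 0.45, 0.2, 0.2, 0.1])
sentence_ids = np.array([0, 0, 0, 1, 1, 1])
print(cross_sentence_occlude(attn, sentence_ids, trigger_idx=1))
# -> [0.  0.  0.  0.4 0.4 0.2]
```

Selecting the best cross-sentence head (or fitting the LINEAR combination) then proceeds exactly as in \u00a72.4, but over these occluded distributions.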
In practice, one would not be able to use two separate argument detectors for the intra and crosssentence cases for the same role, since groundtruth information of whether the argument is crosssentence would be unavailable. We assume this contrived setting only to allow easy analysis 12 , and to gloss over the lack of an intuitive zero-shot mechanism of switching between the two cases, when predicting arguments using just attention heads.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross Sentence Occlusion (CSO)", "sec_num": "3.3.1" }, { "text": "From Table 4 , we can observe the improvement in cross-sentence test accuracy when using the +CSO approach over its simple counterpart, both for BESTHEAD and LINEAR. The only exception to this is the ORIGIN role, where LINEAR betters LINEAR-CSO.", "cite_spans": [], "ref_spans": [ { "start": 5, "end": 12, "text": "Table 4", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "+CSO Results", "sec_num": "3.4" }, { "text": "For the INSTRUMENT role, both BEST-HEAD+CSO and LINEAR+CSO get close to 50% accuracy. In part, their relatively stronger performance can be explained by BESTHEAD and LINEAR already being relatively better at detecting cross-sentence INSTRUMENT (just above 20%, but higher than the sub-15 accuracies on the other roles). Nevertheless, CSO still leads to a doubling of accuracies for both approaches.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "+CSO Results", "sec_num": "3.4" }, { "text": "We highlight here again that these numbers are only on that subset of the test set where we know that the gold arguments are located in other sentences -though this setting is useful for analysis, a model actually solving this task won't have access to this information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "+CSO Results", "sec_num": "3.4" }, { "text": "Even in our case, there is no obvious way to have a consolidated probe which uses a LIN-EAR+CSO and LINEAR component together, since this would require learning an additional component which predicts whether the gold arguments lie intra-sentence or across-sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "+CSO Results", "sec_num": "3.4" }, { "text": "In Figures 2a and 2b , we compare the performances of our methods on perturbations of the test set created using the NONCE procedure outlined in \u00a72.5.3 with their normal test performance. Since NONCE is stochaistic, corresponding results are averaged over NONCE sets created with 5 different seeds. BESTHEAD test performance is more sensitive to NONCE than LINEAR. Especially for INSTRU-MENT, ARTIFACT and ORIGIN, the decrease in accuracy is quite drastic. Surprisingly, we also see increases for 4 of the 15 roles -DEFENDANT, GIVER, VICTIM and PARTICIPANT. All other roles see small decreases. For LINEAR, however, most roles are largely unmoved by NONCE, showing that LINEAR relies less on lexical associations.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 20, "text": "Figures 2a and 2b", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Effect of NONCE", "sec_num": "3.5" }, { "text": "So far, we've focussed on analyzing the 15 most frequent roles. In this subsection, we also evaluate our approaches for some non-frequent roles outside this set, such as PREVENTER and PROSECUTOR. The results are presented in Table 5 . 
Note that, owing to high sparsity for these roles, these results should be taken with \"a pinch of salt\" (which is why we chose to separate them out from the frequent roles).", "cite_spans": [], "ref_spans": [ { "start": 225, "end": 232, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Non-Frequent Roles", "sec_num": "3.6" }, { "text": "For the frequent roles, we had seen that LINEAR was mostly better than, or equally good as BEST-HEAD. For the non-frequent roles, we see that the comparative performance of LINEAR vs BEST-HEAD varies a lot more -LINEAR is better for 6 of the 11 roles, and worse for the other 5. The fall in LINEAR performance is largest for PROSECUTOR (58.33 \u2192 16.67).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Non-Frequent Roles", "sec_num": "3.6" }, { "text": "We conjecture that this drop is due to poor generalization as a result of learning from lesser supervision as a result of the roles being non-frequent. Since BESTHEAD has only two parameters (identity of the best head) compared to the 289 parameters of LINEAR, the latter is more sensitive to this problem.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Non-Frequent Roles", "sec_num": "3.6" }, { "text": "Secondly, we notice that the gap between BEST-HEAD and the RAND and SENTONLY baselines is much narrower. For VEHICLE and MONEY, SEN-TONLY even outdoes BESTHEAD. For VEHICLE, the BESTHEAD accuracy even drops to 0. However, in all these cases, we find that LINEAR still manages to outdo both baselines. We conjecture that these cases could be due to the best head predicted not being very generalizable due to small training set size (for that role). Though LINEAR would also suffer from poor generalization in this case, it might stand its ground better since it relies on multiple heads rather than just one.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Non-Frequent Roles", "sec_num": "3.6" }, { "text": "In our analysis so far, we have been using the same contextual embedding mechanism through- Table 5 : Test accuracies using all the baselines and probe approaches described in \u00a7 \u00a72.4 for some non frequent roles. Both BESTHEAD and LINEAR probes still outdo the baselines in most cases, but not as convincingly as for frequent roles. Unlike the frequent roles case, LINEAR actually does worse than BESTHEAD for many roles.", "cite_spans": [], "ref_spans": [ { "start": 92, "end": 99, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Cased vs Uncased", "sec_num": "3.7" }, { "text": "out, namely bert-base-uncased. In Figure 3a , we plot the difference of BESTHEAD test accuracies when using bert-base-cased vs bert-base-uncased.", "cite_spans": [], "ref_spans": [ { "start": 34, "end": 43, "text": "Figure 3a", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Cased vs Uncased", "sec_num": "3.7" }, { "text": "We can see that bert-base-uncased is better for most roles -except for Attacker, Victim and Artifact. We also notice that the best layer-head configuration {l best , h best } is mostly not preserved between the bert-base-cased and bert-base-uncased scenarios. 
The difference between bert-base-uncased and bert-base-cased is even more drastic in the cross sentence only experiment , for instance, while there exists a single head which can find cross-sentence Instrument args with 37% accuracy, the best such head of bert-base-cased has only 17% accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cased vs Uncased", "sec_num": "3.7" }, { "text": "In Figure 3 .8, we illustrate some examples of BEST-HEAD identifying arguments. We defer further discussion to Appendix \u00a7A owing to lack of space.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 11, "text": "Figure 3", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Qualitative Examples", "sec_num": "3.8" }, { "text": "A complete description of the large body of work on probing is beyond the scope of this paper. Besides those discussed earlier, other aspects studied include filler-gap dependencies (Wilcox et al., 2018) , function word comprehension (Kim et al., 2019) , sentence-level properties (Adi et al., 2016) and negative polarity items (Warstadt et al., 2019) .", "cite_spans": [ { "start": 182, "end": 203, "text": "(Wilcox et al., 2018)", "ref_id": "BIBREF26" }, { "start": 234, "end": 252, "text": "(Kim et al., 2019)", "ref_id": "BIBREF12" }, { "start": 281, "end": 299, "text": "(Adi et al., 2016)", "ref_id": "BIBREF0" }, { "start": 328, "end": 351, "text": "(Warstadt et al., 2019)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "4" }, { "text": "Probing is not limited to examining pairwise word relations or sentence properties. Hewitt and Manning (2019) find that BERT token representations are linearly projectable into a space where they embed constituency structure. Recently, Sorodoc et al. (2020) probed transformer based language models for coreference. However, they restrict themselves to entity coreference. Further- Hewitt and Liang (2019) raised a note of caution about classifier based probes, pointing out that probes themselves could be rich enough to learn certain phenomena even with random representations. We avoid direct classifier-based probing, thus avoiding the mentioned pitfalls.", "cite_spans": [ { "start": 84, "end": 109, "text": "Hewitt and Manning (2019)", "ref_id": "BIBREF10" }, { "start": 236, "end": 257, "text": "Sorodoc et al. (2020)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "4" }, { "text": "We showed how BERT's attention heads have modest but well above chance ability to detect arguments for event roles. This ability is achievable either with only i) 2 parameters per role (BESTHEAD) ii) 289 parameters per role (LINEAR). Furthermore, the supervision required to reach this is just \u2248 11% of full training set size. Secondly, we propose a method to learning separate heads (combinations) for cross-sentence argument detection. Our experiments show that the heads so learnt have higher cross-sentence accuracy. Thirdly, we show that LINEAR performance is robust to a perturbed NONCE test setting with weakened lexical associations. In future, we plan to extend our probing to other event aspects like coreference and subevents. Figure 4 : In (a), BESTHEAD correctly picks out the TARGET of \"airstrike\" as \"Yemen\". In (b), BESTHEAD correctly picks out the RECIPIENT of \"advised\" as \"companies\". In (c), the token picked is coreferent but not identical to the gold argument. 
Attentions are shown as blue lines from trigger token, with lineweight \u221d value. Gold arguments are shaded green .", "cite_spans": [], "ref_spans": [ { "start": 738, "end": 746, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "We omit layer index l in the rest of the passage to declutter our notation.5 For bert-base. bert-large uses 24 heads and 24 layers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "To simplify our analysis, we do not include multi-word triggers. These form only \u22481.6% of the cases in the dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We will interchangeably refer to Acc as just \"accuracy\" in plain-text in the rest of the paper", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We are aware that case mostly doesn't matter since we use bert-*-uncased in most experiments", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "It is quite non-intuitive for GIVER and BENEFICIARY spans to overlap -we don't see any examples with the same.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "RAMS comes with a given sentence segmentation.12 And also so that we can validate our diagnosis for poor cross-sentence performance in \u00a73.3", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank Nikita Moghe and Hiroaki Hayashi as well as the three anonymous reviewers for their valuable feedback.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Fine-grained analysis of sentence embeddings using auxiliary prediction tasks", "authors": [ { "first": "Yossi", "middle": [], "last": "Adi", "suffix": "" }, { "first": "Einat", "middle": [], "last": "Kermany", "suffix": "" }, { "first": "Yonatan", "middle": [], "last": "Belinkov", "suffix": "" }, { "first": "Ofer", "middle": [], "last": "Lavi", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1608.04207" ] }, "num": null, "urls": [], "raw_text": "Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2016. Fine-grained anal- ysis of sentence embeddings using auxiliary predic- tion tasks. arXiv preprint arXiv:1608.04207.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Detecting Subevent Structure for Event Coreference Resolution", "authors": [ { "first": "Jun", "middle": [], "last": "Araki", "suffix": "" }, { "first": "Zhengzhong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Eduard", "middle": [ "H" ], "last": "Hovy", "suffix": "" }, { "first": "Teruko", "middle": [], "last": "Mitamura", "suffix": "" } ], "year": 2014, "venue": "LREC", "volume": "", "issue": "", "pages": "4553--4558", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jun Araki, Zhengzhong Liu, Eduard H Hovy, and Teruko Mitamura. 2014. Detecting Subevent Struc- ture for Event Coreference Resolution. 
In LREC, pages 4553-4558.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Data statements for natural language processing: Toward mitigating system bias and enabling better science", "authors": [ { "first": "M", "middle": [], "last": "Emily", "suffix": "" }, { "first": "Batya", "middle": [], "last": "Bender", "suffix": "" }, { "first": "", "middle": [], "last": "Friedman", "suffix": "" } ], "year": 2018, "venue": "Transactions of the Association for Computational Linguistics", "volume": "6", "issue": "", "pages": "587--604", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emily M Bender and Batya Friedman. 2018. Data statements for natural language processing: Toward mitigating system bias and enabling better science. Transactions of the Association for Computational Linguistics, 6:587-604.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Unsupervised learning of narrative event chains", "authors": [ { "first": "Nathanael", "middle": [], "last": "Chambers", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ACL-08: HLT", "volume": "", "issue": "", "pages": "789--797", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nathanael Chambers and Dan Jurafsky. 2008. Unsu- pervised learning of narrative event chains. In Pro- ceedings of ACL-08: HLT, pages 789-797.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "What does BERT look at? an analysis of BERT's attention", "authors": [ { "first": "Kevin", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Urvashi", "middle": [], "last": "Khandelwal", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1906.04341" ] }, "num": null, "urls": [], "raw_text": "Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D Manning. 2019. What does BERT look at? an analysis of BERT's attention. arXiv preprint arXiv:1906.04341.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "The BECauSE corpus 2.0: Annotating causality and overlapping relations", "authors": [ { "first": "Jesse", "middle": [], "last": "Dunietz", "suffix": "" }, { "first": "Lori", "middle": [], "last": "Levin", "suffix": "" }, { "first": "Jaime", "middle": [ "G" ], "last": "Carbonell", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 11th Linguistic Annotation Workshop", "volume": "", "issue": "", "pages": "95--104", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jesse Dunietz, Lori Levin, and Jaime G Carbonell. 2017. 
The BECauSE corpus 2.0: Annotating causal- ity and overlapping relations. In Proceedings of the 11th Linguistic Annotation Workshop, pages 95- 104.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Multi-sentence argument linking", "authors": [ { "first": "Seth", "middle": [], "last": "Ebner", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Culkin", "suffix": "" }, { "first": "Kyle", "middle": [], "last": "Rawlins", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "8057--8077", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.718" ] }, "num": null, "urls": [], "raw_text": "Seth Ebner, Patrick Xia, Ryan Culkin, Kyle Rawlins, and Benjamin Van Durme. 2020. Multi-sentence ar- gument linking. In Proceedings of the 58th Annual Meeting of the Association for Computational Lin- guistics, pages 8057-8077, Online. Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Colorless green recurrent networks dream hierarchically", "authors": [ { "first": "Kristina", "middle": [], "last": "Gulordava", "suffix": "" }, { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Tal", "middle": [], "last": "Linzen", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1803.11138" ] }, "num": null, "urls": [], "raw_text": "Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, and Marco Baroni. 2018. Color- less green recurrent networks dream hierarchically. arXiv preprint arXiv:1803.11138.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Designing and interpreting probes with control tasks", "authors": [ { "first": "John", "middle": [], "last": "Hewitt", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1909.03368" ] }, "num": null, "urls": [], "raw_text": "John Hewitt and Percy Liang. 2019. Designing and in- terpreting probes with control tasks. arXiv preprint arXiv:1909.03368.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "A structural probe for finding syntax in word representations", "authors": [ { "first": "John", "middle": [], "last": "Hewitt", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4129--4138", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Hewitt and Christopher D Manning. 2019. A structural probe for finding syntax in word represen- tations. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4129-4138.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Detecting and explaining causes from text for a time series event", "authors": [ { "first": "Dongyeop", "middle": [], "last": "Kang", "suffix": "" }, { "first": "Varun", "middle": [], "last": "Gangal", "suffix": "" }, { "first": "Ang", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Zheng", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2758--2767", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dongyeop Kang, Varun Gangal, Ang Lu, Zheng Chen, and Eduard Hovy. 2017. Detecting and explaining causes from text for a time series event. In Proceed- ings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2758-2767.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Probing what different NLP tasks teach machines about function word comprehension", "authors": [ { "first": "Najoung", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Roma", "middle": [], "last": "Patel", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Poliak", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Mccoy", "suffix": "" }, { "first": "Ian", "middle": [], "last": "Tenney", "suffix": "" }, { "first": "Alexis", "middle": [], "last": "Ross", "suffix": "" }, { "first": "Tal", "middle": [], "last": "Linzen", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1904.11544" ] }, "num": null, "urls": [], "raw_text": "Najoung Kim, Roma Patel, Adam Poliak, Alex Wang, Patrick Xia, R Thomas McCoy, Ian Tenney, Alexis Ross, Tal Linzen, Benjamin Van Durme, et al. 2019. Probing what different NLP tasks teach machines about function word comprehension. arXiv preprint arXiv:1904.11544.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "The emergence of number and syntax units in LSTM language models", "authors": [ { "first": "Yair", "middle": [], "last": "Lakretz", "suffix": "" }, { "first": "German", "middle": [], "last": "Kruszewski", "suffix": "" }, { "first": "Theo", "middle": [], "last": "Desbordes", "suffix": "" }, { "first": "Dieuwke", "middle": [], "last": "Hupkes", "suffix": "" }, { "first": "Stanislas", "middle": [], "last": "Dehaene", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1903.07435" ] }, "num": null, "urls": [], "raw_text": "Yair Lakretz, German Kruszewski, Theo Desbordes, Dieuwke Hupkes, Stanislas Dehaene, and Marco Ba- roni. 2019. The emergence of number and syntax units in LSTM language models. 
arXiv preprint arXiv:1903.07435.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Assessing the ability of LSTMs to learn syntax-sensitive dependencies", "authors": [ { "first": "Tal", "middle": [], "last": "Linzen", "suffix": "" }, { "first": "Emmanuel", "middle": [], "last": "Dupoux", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2016, "venue": "Transactions of the Association for Computational Linguistics", "volume": "4", "issue": "", "pages": "521--535", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of LSTMs to learn syntax-sensitive dependencies. Transactions of the Association for Computational Linguistics, 4:521- 535.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Event Coreference Resolution: A Survey of Two Decades of Research", "authors": [ { "first": "Jing", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Vincent", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2018, "venue": "IJCAI", "volume": "", "issue": "", "pages": "5479--5486", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jing Lu and Vincent Ng. 2018. Event Coreference Res- olution: A Survey of Two Decades of Research. In IJCAI, pages 5479-5486.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Targeted syntactic evaluation of language models", "authors": [ { "first": "Rebecca", "middle": [], "last": "Marvin", "suffix": "" }, { "first": "Tal", "middle": [], "last": "Linzen", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1808.09031" ] }, "num": null, "urls": [], "raw_text": "Rebecca Marvin and Tal Linzen. 2018. Targeted syn- tactic evaluation of language models. arXiv preprint arXiv:1808.09031.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Glove: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word rep- resentation. 
In Proceedings of the 2014 conference on empirical methods in natural language process- ing (EMNLP), pages 1532-1543.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Deep contextualized word representations", "authors": [ { "first": "E", "middle": [], "last": "Matthew", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Peters", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Lee", "suffix": "" }, { "first": "", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1802.05365" ] }, "num": null, "urls": [], "raw_text": "Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word repre- sentations. arXiv preprint arXiv:1802.05365.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Language models as knowledge bases? arXiv preprint", "authors": [ { "first": "Fabio", "middle": [], "last": "Petroni", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rockt\u00e4schel", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Anton", "middle": [], "last": "Bakhtin", "suffix": "" }, { "first": "Yuxiang", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Alexander", "middle": [ "H" ], "last": "Miller", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1909.01066" ] }, "num": null, "urls": [], "raw_text": "Fabio Petroni, Tim Rockt\u00e4schel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H Miller, and Se- bastian Riedel. 2019. Language models as knowl- edge bases? arXiv preprint arXiv:1909.01066.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Neural machine translation of rare words with subword units", "authors": [ { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1508.07909" ] }, "num": null, "urls": [], "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Latent Name Artifacts in Pre-trained Language Models", "authors": [ { "first": "Vered", "middle": [], "last": "Shwartz", "suffix": "" }, { "first": "Rachel", "middle": [], "last": "Rudinger", "suffix": "" }, { "first": "Oyvind", "middle": [], "last": "Tafjord", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2004.03012" ] }, "num": null, "urls": [], "raw_text": "Vered Shwartz, Rachel Rudinger, and Oyvind Tafjord. 2020. \" you are grounded!\": Latent Name Artifacts in Pre-trained Language Models. 
arXiv preprint arXiv:2004.03012.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Probing for Referential information in Language Models", "authors": [ { "first": "Ionut", "middle": [], "last": "Sorodoc", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Gulordava", "suffix": "" }, { "first": "Gemma", "middle": [], "last": "Boleda", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4177--4189", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ionut Sorodoc, Kristina Gulordava, and Gemma Boleda. 2020. Probing for Referential information in Language Models. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 4177-4189.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Fine-grained temporal relation extraction", "authors": [ { "first": "Siddharth", "middle": [], "last": "Vashishtha", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" }, { "first": "Aaron", "middle": [ "Steven" ], "last": "White", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1902.01390" ] }, "num": null, "urls": [], "raw_text": "Siddharth Vashishtha, Benjamin Van Durme, and Aaron Steven White. 2019. Fine-grained temporal relation extraction. arXiv preprint arXiv:1902.01390.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. 
In Advances in Neural Information Pro- cessing Systems, pages 5998-6008.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "vestigating Bert's knowledge of Language: Five Analysis Methods with NPIs", "authors": [ { "first": "Alex", "middle": [], "last": "Warstadt", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Ioana", "middle": [], "last": "Grosu", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Hagen", "middle": [], "last": "Blix", "suffix": "" }, { "first": "Yining", "middle": [], "last": "Nie", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Alsop", "suffix": "" }, { "first": "Shikha", "middle": [], "last": "Bordia", "suffix": "" }, { "first": "Haokun", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Alicia", "middle": [], "last": "Parrish", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1909.02597" ] }, "num": null, "urls": [], "raw_text": "Alex Warstadt, Yu Cao, Ioana Grosu, Wei Peng, Ha- gen Blix, Yining Nie, Anna Alsop, Shikha Bor- dia, Haokun Liu, Alicia Parrish, et al. 2019. In- vestigating Bert's knowledge of Language: Five Analysis Methods with NPIs. arXiv preprint arXiv:1909.02597.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "What do RNN Language models learn about Filler-Gap Dependencies? arXiv preprint", "authors": [ { "first": "Ethan", "middle": [], "last": "Wilcox", "suffix": "" }, { "first": "Roger", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Takashi", "middle": [], "last": "Morita", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Futrell", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1809.00042" ] }, "num": null, "urls": [], "raw_text": "Ethan Wilcox, Roger Levy, Takashi Morita, and Richard Futrell. 2018. What do RNN Language models learn about Filler-Gap Dependencies? arXiv preprint arXiv:1809.00042.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Huggingface's transformers: State-of-the-art natural language processing", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Delangue", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Moi", "suffix": "" }, { "first": "Pierric", "middle": [], "last": "Cistac", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rault", "suffix": "" }, { "first": "R'emi", "middle": [], "last": "Louf", "suffix": "" }, { "first": "Morgan", "middle": [], "last": "Funtowicz", "suffix": "" }, { "first": "Jamie", "middle": [], "last": "Brew", "suffix": "" } ], "year": 2019, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R'emi Louf, Morgan Funtow- icz, and Jamie Brew. 2019. Huggingface's trans- formers: State-of-the-art natural language process- ing. 
ArXiv, abs/1910.03771.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "uris": null, "text": "Difference in a) BESTHEAD and b) LINEAR accuracies over normal and NONCE test sets", "num": null }, "FIGREF1": { "type_str": "figure", "uris": null, "text": "\u2206 in Test accuracy of a) BESTHEAD b) LINEAR using bert-base-uncased vs bert-base-cased", "num": null }, "TABREF2": { "html": null, "content": "
", "type_str": "table", "num": null, "text": "Split example counts and token sizes from the RAMS dataset. Note that we use different splits since our work is a probing exercise." }, "TABREF4": { "html": null, "content": "
+ve h indices denote "from" heads, while -ve indices denote "to" heads, as explained in \u00a72.2.1.
", "type_str": "table", "num": null, "text": "Best layer-head pair, {l,h}_{best}, and % Accuracy for the 15 most frequent roles in RAMS, using bert-base-uncased." }, "TABREF6": { "html": null, "content": "
and TARGET, which remains the same. A possible reason could be the higher fraction of cross-sentence gold arguments for these roles. The five roles with the lowest fraction of intra-sentence arguments are DESTINATION (58.92%), INSTRUMENT (62.74%), ORIGIN (68.18%), PLACE (70.48%) and TARGET (81.53%).
", "type_str": "table", "num": null, "text": "Test accuracies using all the baselines and probe" }, "TABREF8": { "html": null, "content": "", "type_str": "table", "num": null, "text": "Accuracies on cross-sentence test examples using BESTHEAD+CSO and LINEAR+CSO. The values Acc T otal \u2192AccCross in parentheses are the total test accuracy and cross-sentence test accuracy respectively, using the simple version of the same approach i.e BEST-HEAD and LINEAR. The % of cross-sentence examples for each role are: {ORIGIN:31.82 INSTRUMENT:37.26 PARTICI-PANT:17.14 PLACE:29.52}" } } } }