{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:16:45.800999Z"
},
"title": "Accounting for Agreement Phenomena in Sentence Comprehension with Transformer Language Models: Effects of Similarity-based Interference on Surprisal and Attention",
"authors": [
{
"first": "Soo",
"middle": [
"Hyun"
],
"last": "Ryu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Michigan",
"location": {}
},
"email": ""
},
{
"first": "Richard",
"middle": [
"L"
],
"last": "Lewis",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Michigan",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We advance a novel explanation of similaritybased interference effects in subject-verb and reflexive pronoun agreement processing, grounded in surprisal values computed from a pretrained large-scale Transformer model, GPT-2. Specifically, we show that surprisal of the verb or reflexive pronoun predicts facilitatory interference effects in ungrammatical sentences, where a distractor noun that matches in number with the verb or pronoun leads to faster reading times, despite the distractor not participating in the agreement relation. We review the human empirical evidence for such effects, including recent metaanalyses and large-scale studies. We also show that attention patterns (indexed by entropy and other measures) in the Transformer show patterns of diffuse attention in the presence of similar distractors, consistent with cue-based retrieval models of parsing. But in contrast to these models, the attentional cues and memory representations are learned entirely from the simple self-supervised task of predicting the next word.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "We advance a novel explanation of similaritybased interference effects in subject-verb and reflexive pronoun agreement processing, grounded in surprisal values computed from a pretrained large-scale Transformer model, GPT-2. Specifically, we show that surprisal of the verb or reflexive pronoun predicts facilitatory interference effects in ungrammatical sentences, where a distractor noun that matches in number with the verb or pronoun leads to faster reading times, despite the distractor not participating in the agreement relation. We review the human empirical evidence for such effects, including recent metaanalyses and large-scale studies. We also show that attention patterns (indexed by entropy and other measures) in the Transformer show patterns of diffuse attention in the presence of similar distractors, consistent with cue-based retrieval models of parsing. But in contrast to these models, the attentional cues and memory representations are learned entirely from the simple self-supervised task of predicting the next word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Deep Neural Network (DNN) language models (Le-Cun et al., 2015; Sundermeyer et al., 2012; Vaswani et al., 2017) have recently attracted the attention of researchers interested in assessing their linguistic competence Da Costa and Chaves, 2020; Ettinger, 2020; Wilcox et al., , 2019 and potential to provide accounts of psycholinguistic phenomena in sentence processing Linzen and Baroni, 2021; Van Schijndel and Linzen, 2018; Wilcox et al., 2020) . In this paper we show how attention-based transformer models (we use a pre-trained version of GPT-2) provide the basis for a new theoretical account of facilitatory interference effects in subject-verb and reflexive agreement processing. These effects, which we review in detail below, have played an important role in psycholinguistic theory because they show that properties of noun phrases that are not the grammatical targets of agreement relations may nonetheless exert an influence on processing time at points where those agreement relations are computed.",
"cite_spans": [
{
"start": 42,
"end": 63,
"text": "(Le-Cun et al., 2015;",
"ref_id": null
},
{
"start": 64,
"end": 89,
"text": "Sundermeyer et al., 2012;",
"ref_id": "BIBREF33"
},
{
"start": 90,
"end": 111,
"text": "Vaswani et al., 2017)",
"ref_id": "BIBREF37"
},
{
"start": 217,
"end": 243,
"text": "Da Costa and Chaves, 2020;",
"ref_id": "BIBREF5"
},
{
"start": 244,
"end": 259,
"text": "Ettinger, 2020;",
"ref_id": "BIBREF7"
},
{
"start": 260,
"end": 281,
"text": "Wilcox et al., , 2019",
"ref_id": "BIBREF43"
},
{
"start": 369,
"end": 393,
"text": "Linzen and Baroni, 2021;",
"ref_id": "BIBREF24"
},
{
"start": 394,
"end": 425,
"text": "Van Schijndel and Linzen, 2018;",
"ref_id": "BIBREF35"
},
{
"start": 426,
"end": 446,
"text": "Wilcox et al., 2020)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The explanation we propose here is a novel one grounded in surprisal (Hale, 2001; Levy, 2008) , but with origins in graded attention and similaritybased interference (Van Dyke and Lewis, 2003; Lewis et al., 2006; J\u00e4ger et al., 2017) . We use surprisal as the key predictor of reading time (Levy, 2013) , and through targeted analyses of patterns of attention in the transformer, show that the model behaves in ways consistent with cue-based retrieval theories of sentence processing. The account thus provides a new integration of surprisal and similarity-based interference theories of sentence processing, adding to a growing literature of work integrating noisy memory and surprisal (Futrell et al., 2020) . In this case, the noisy representations arise from training the transformer, and interference must exert its influence on reading times through a surprisal bottleneck (Levy, 2008) .",
"cite_spans": [
{
"start": 69,
"end": 81,
"text": "(Hale, 2001;",
"ref_id": "BIBREF12"
},
{
"start": 82,
"end": 93,
"text": "Levy, 2008)",
"ref_id": "BIBREF17"
},
{
"start": 180,
"end": 192,
"text": "Lewis, 2003;",
"ref_id": "BIBREF34"
},
{
"start": 193,
"end": 212,
"text": "Lewis et al., 2006;",
"ref_id": "BIBREF22"
},
{
"start": 213,
"end": 232,
"text": "J\u00e4ger et al., 2017)",
"ref_id": "BIBREF13"
},
{
"start": 289,
"end": 301,
"text": "(Levy, 2013)",
"ref_id": "BIBREF18"
},
{
"start": 686,
"end": 708,
"text": "(Futrell et al., 2020)",
"ref_id": "BIBREF8"
},
{
"start": 878,
"end": 890,
"text": "(Levy, 2008)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The remainder of this paper is organized as follows. We first provide an overview of some of key empirical work in human sentence processing concerning subject-verb and reflexive pronoun agreement. We then provide a brief overview of the GPT-2 architecture, its interesting psycholinguistic properties, and the method and metrics that we will use to examine the agreement effects. We then apply GPT-2 to the materials used in several different human reading time studies. We conclude with some theoretical reflections, identification of weaknesses, and suggestions for future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "One long-standing focus of work in sentence comprehension is understanding how the structure of human short-term memory might support and con-strain the incremental formation of linguistic dependencies among phrases and words (Gibson, 1998; Lewis, 1996; Lewis et al., 2006; Miller and Chomsky, 1963; Nicenboim et al., 2015) . A key property of human memory thought to shape sentence processing is similarity-based interference (Miller and Chomsky, 1963; Lewis, 1993 Lewis, , 1996 . Figure 1 shows a simple example of how such interference arises in cue-based retrieval models of sentence processing, as a function of the compatibility of retrieval targets and distractors with retrieval cues (Lewis and Vasishth, 2005; Lewis et al., 2006; Van Dyke and Lewis, 2003 ) (Corresponding sentences are from Wagers et al. 2009's Exp 4-6 shown in Table 1 ). Inhibitory interference effects occur when features of the target perfectly match the retrieval cue and features of a distractor partially matches, while facilitatory interference effects occur when the features of both target and distractor partially match the features of retrieval cue.",
"cite_spans": [
{
"start": 226,
"end": 240,
"text": "(Gibson, 1998;",
"ref_id": "BIBREF11"
},
{
"start": 241,
"end": 253,
"text": "Lewis, 1996;",
"ref_id": "BIBREF20"
},
{
"start": 254,
"end": 273,
"text": "Lewis et al., 2006;",
"ref_id": "BIBREF22"
},
{
"start": 274,
"end": 299,
"text": "Miller and Chomsky, 1963;",
"ref_id": "BIBREF26"
},
{
"start": 300,
"end": 323,
"text": "Nicenboim et al., 2015)",
"ref_id": "BIBREF28"
},
{
"start": 427,
"end": 453,
"text": "(Miller and Chomsky, 1963;",
"ref_id": "BIBREF26"
},
{
"start": 454,
"end": 465,
"text": "Lewis, 1993",
"ref_id": "BIBREF19"
},
{
"start": 466,
"end": 479,
"text": "Lewis, , 1996",
"ref_id": "BIBREF20"
},
{
"start": 693,
"end": 719,
"text": "(Lewis and Vasishth, 2005;",
"ref_id": "BIBREF21"
},
{
"start": 720,
"end": 739,
"text": "Lewis et al., 2006;",
"ref_id": "BIBREF22"
},
{
"start": 740,
"end": 764,
"text": "Van Dyke and Lewis, 2003",
"ref_id": "BIBREF34"
}
],
"ref_spans": [
{
"start": 482,
"end": 491,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 839,
"end": 846,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Agreement Interference Effects in Human Sentence Processing",
"sec_num": "2"
},
{
"text": "In this study, we focus on interference effects in subject-verb number agreement and reflexive pronoun-antecedent agreement, specifically in languages where the agreement features include syntactic number which is morphologically marked on the verb or pronoun. In such cases, number is plausibly a useful retrieval cue, and it is easy to manipulate the number of distractor noun phrases to allow for carefully controlled empirical contrasts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Agreement Interference Effects in Human Sentence Processing",
"sec_num": "2"
},
{
"text": "Interference in subject-verb agreement. Previous studies (Pearlmutter et al., 1999; Wagers et al., 2009; Dillon et al., 2013; Lago et al., 2015; J\u00e4ger et al., 2020) attest to both inhibitory interference (slower processing in the presence of an interfering distractor) and facilitatory interference (faster processing in the presence of an interfering distractor), but the existing empirical support for inhibitory interference is weak, and many studies fail to find any evidence for it (Dillon et al., 2013; Lago et al., 2015; Wagers et al., 2009) . There is stronger evidence for facilitatory effects, which arise in ungrammatical structures where the verb or pronoun fails to agree in number with the structurally correct target noun phrase, but where either an intervening or preceding distractor noun phrase does match in number. Example A. below illustrates, taken from Wagers et al. (2009) , where the subject and verb are boldfaced and the distractor noun is underlined:",
"cite_spans": [
{
"start": 57,
"end": 83,
"text": "(Pearlmutter et al., 1999;",
"ref_id": "BIBREF29"
},
{
"start": 84,
"end": 104,
"text": "Wagers et al., 2009;",
"ref_id": "BIBREF41"
},
{
"start": 105,
"end": 125,
"text": "Dillon et al., 2013;",
"ref_id": "BIBREF6"
},
{
"start": 126,
"end": 144,
"text": "Lago et al., 2015;",
"ref_id": "BIBREF15"
},
{
"start": 145,
"end": 164,
"text": "J\u00e4ger et al., 2020)",
"ref_id": "BIBREF14"
},
{
"start": 487,
"end": 508,
"text": "(Dillon et al., 2013;",
"ref_id": "BIBREF6"
},
{
"start": 509,
"end": 527,
"text": "Lago et al., 2015;",
"ref_id": "BIBREF15"
},
{
"start": 528,
"end": 548,
"text": "Wagers et al., 2009)",
"ref_id": "BIBREF41"
},
{
"start": 876,
"end": 896,
"text": "Wagers et al. (2009)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Agreement Interference Effects in Human Sentence Processing",
"sec_num": "2"
},
{
"text": "A. The slogan on the posters were designed to get attention. A Bayesian meta-analysis of agreement phenomena was recently conducted with an extensive set of studies (J\u00e4ger et al., 2017; Vasishth and Engelmann, 2021) . Their analysis of first-pass reading times from eye-tracking experiments on subjectverb number agreement is shown in Figure 1 . The evidence from the meta-analysis is consistent with a very small or nonexistent inhibitory interference effect in in the grammatical conditions, with a small but robust facilitatory interference effects in the ungrammatical conditions. Concerned that the existing experiments did not have sufficient power to detect the inhibitory effects, Nicenboim et al. (2018) ran a large scale eye-tracking study (185 participants) with materials designed to increase the inhibition effect, and did detect a 9ms effect (95% credible posterior interval 0-18ms). This represents the strongest evidence to date for inhibitory effects in grammatical agreement structures, but even this evidence indicates the effect may be near zero.",
"cite_spans": [
{
"start": 165,
"end": 185,
"text": "(J\u00e4ger et al., 2017;",
"ref_id": "BIBREF13"
},
{
"start": 186,
"end": 215,
"text": "Vasishth and Engelmann, 2021)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [
{
"start": 335,
"end": 343,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Agreement Interference Effects in Human Sentence Processing",
"sec_num": "2"
},
{
"text": "Interference in reflexive pronoun agreement. The materials from boldfaced studies are those that we used in our GPT-2 experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Agreement Interference Effects in Human Sentence Processing",
"sec_num": "2"
},
{
"text": "(2) non-interfering The basketball coach who trained the star player usually blamed themselves for the ...",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Agreement Interference Effects in Human Sentence Processing",
"sec_num": "2"
},
{
"text": "The empirical record concerning facilitatory effects in reflexive agreement is mixed. Some have claimed that such effects do not arise (Sturt, 2003; Xiang et al., 2009; Dillon et al., 2013) , and that this is expected under a model in which the structural constraints from binding theory (Chomsky et al., 1982) serve to effectively filter candidates for retrieval-in short, the parser does not consider or make contact with the ungrammatical distractor noun phrases (Sturt, 2003; Dillon et al., 2013) .",
"cite_spans": [
{
"start": 135,
"end": 148,
"text": "(Sturt, 2003;",
"ref_id": "BIBREF32"
},
{
"start": 149,
"end": 168,
"text": "Xiang et al., 2009;",
"ref_id": "BIBREF46"
},
{
"start": 169,
"end": 189,
"text": "Dillon et al., 2013)",
"ref_id": "BIBREF6"
},
{
"start": 288,
"end": 310,
"text": "(Chomsky et al., 1982)",
"ref_id": "BIBREF3"
},
{
"start": 466,
"end": 479,
"text": "(Sturt, 2003;",
"ref_id": "BIBREF32"
},
{
"start": 480,
"end": 500,
"text": "Dillon et al., 2013)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Agreement Interference Effects in Human Sentence Processing",
"sec_num": "2"
},
{
"text": "However, a recent Bayesian meta-analysis of key experiments by Dillon et al. (2013) indicates substantially overlapping posterior estimates of facilitatory effects for subject-verb agreement and reflexive agreement (Vasishth and Engelmann, 2021) . Concerned again about under-powered studies, J\u00e4ger et al. (2020) undertook a large scale (181 participants) eye-tracking replication and did find evidence for nearly equivalent facilitatory speedups for reflexive and subject-verb agreement (Figure 3) . This result is not inconsistent with the metaanalysis, but provides stronger evidence that the facilitation effects in reflexives are real.",
"cite_spans": [
{
"start": 63,
"end": 83,
"text": "Dillon et al. (2013)",
"ref_id": "BIBREF6"
},
{
"start": 215,
"end": 245,
"text": "(Vasishth and Engelmann, 2021)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [
{
"start": 488,
"end": 498,
"text": "(Figure 3)",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Agreement Interference Effects in Human Sentence Processing",
"sec_num": "2"
},
{
"text": "We take advantage of the very broad coverage Figure 10 . Interference e ects in grammatical (a) and ungrammatical conditions (b). The figure shows the posterior means together with 95% credible intervals of the interference e ects in total fixation times. These estimates were obtained from the Bayesian analysis of the original data of Dillon et al. (2013) , and from our replication data. Separate e ect estimates for each dependency type as well as the overall e ect obtained when collapsing over dependencies are presented. The left-most line of each plot shows the range of predictions of the Lewis and Vasishth (2005) ACT-R cue-based retrieval model (see Section Deriving quantitative predictions from the Lewis and Vasishth (2005) model for details). of GPT-2 by having GPT-2 process the same set of sentence materials as human subjects in four different agreement experiments. To anticipate our key results, we find GPT-2 yields lower surprisal, i.e. facilitatory effects, in both subject-verb and reflexive pronoun conditions. Furthermore, we show that attention at the verb or pronoun is distributed to both target and distractor in just those conditions where the distractor matches the hypothesized number retrieval cue (Lin et al., 2019) . Finally, we show that the surprisal contrasts between matching and nonmatching distractors in the grammatical (inhibitory)",
"cite_spans": [
{
"start": 337,
"end": 357,
"text": "Dillon et al. (2013)",
"ref_id": "BIBREF6"
},
{
"start": 598,
"end": 623,
"text": "Lewis and Vasishth (2005)",
"ref_id": "BIBREF21"
},
{
"start": 712,
"end": 737,
"text": "Lewis and Vasishth (2005)",
"ref_id": "BIBREF21"
},
{
"start": 1232,
"end": 1250,
"text": "(Lin et al., 2019)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 45,
"end": 54,
"text": "Figure 10",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Agreement Interference Effects in Human Sentence Processing",
"sec_num": "2"
},
{
"text": "interference conditions are essentially zero.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Agreement Interference Effects in Human Sentence Processing",
"sec_num": "2"
},
{
"text": "The psycholinguistic relevance of GPT-2 and its training method. GPT-2 (Generative Pre-trained Transformer-2), introduced by OpenAI in Radford et al. 2019, is a language model with a decoder-only Transformer architecture (Vaswani et al., 2017) , and has achieved state-of-the-art performance in diverse downstream tasks. GPT-2 and other large-scaled language models based on transformer architectures were trained on billions of words of text, and engineered with performance in mind, not with concern for psycholinguistic plausibility. Why then should we then take them seriously as the basis of psycholinguistic models?",
"cite_spans": [
{
"start": 221,
"end": 243,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "GPT-2 for Psycholinguistic Analysis",
"sec_num": "3"
},
{
"text": "We believe that the new transformer-based models have three important properties that make them of psycholinguistic interest. (a) The models are among the first to serve as the basis of systems that achieve human-level performance on a range of linguistic tasks, and they directly generate a key quantity, surprisal of the next word, that we know is an important predictor of reading times in humans (Hale, 2001; Levy, 2008) . (b) Although the data requirements are currently much greater than that for human language acquisition, the models are trained on a simple task-predict the next word-that may plausibly serve as the basis of a self-supervised learning signal in human language acquisition. The representations that arise from such learning are thus psycholinguistically interesting. (c) The learned soft-attention and parallel content-based retrieval of representations of prior input are architectural properties of the GPT models that align very closely with retrieval-based models of sentence comprehension (Lewis et al., 2006) . And the structure of these psycholinguistic models was proposed as a response to the challenges of computing long-distance dependencies-the same challenge that motivated the transformer as a departure from standard recurrent architectures (Vaswani et al., 2017; Galassi et al., 2020) .",
"cite_spans": [
{
"start": 400,
"end": 412,
"text": "(Hale, 2001;",
"ref_id": "BIBREF12"
},
{
"start": 413,
"end": 424,
"text": "Levy, 2008)",
"ref_id": "BIBREF17"
},
{
"start": 1019,
"end": 1039,
"text": "(Lewis et al., 2006)",
"ref_id": "BIBREF22"
},
{
"start": 1281,
"end": 1303,
"text": "(Vaswani et al., 2017;",
"ref_id": "BIBREF37"
},
{
"start": 1304,
"end": 1325,
"text": "Galassi et al., 2020)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "GPT-2 for Psycholinguistic Analysis",
"sec_num": "3"
},
{
"text": "Identifying specialized heads in GPT-2. Here we use the medium-sized GPT-2 which is constructed with 12 layers, each of which includes 12 attention heads. Previous studies have revealed that individual attention heads in Transformer models serve are at least partially specialized in function (Clark et al., 2019; Vig, 2019; Vig and Belinkov, 2019; Voita et al., 2019) . Specifically, Voita et al. (2019) found that certain attention heads are specialized for different dependency relations.",
"cite_spans": [
{
"start": 293,
"end": 313,
"text": "(Clark et al., 2019;",
"ref_id": "BIBREF4"
},
{
"start": 314,
"end": 324,
"text": "Vig, 2019;",
"ref_id": "BIBREF38"
},
{
"start": 325,
"end": 348,
"text": "Vig and Belinkov, 2019;",
"ref_id": "BIBREF39"
},
{
"start": 349,
"end": 368,
"text": "Voita et al., 2019)",
"ref_id": "BIBREF40"
},
{
"start": 385,
"end": 404,
"text": "Voita et al. (2019)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "GPT-2 for Psycholinguistic Analysis",
"sec_num": "3"
},
{
"text": "Following Voita et al. (2019) 's method, we identified heads that are specialized for subject-verb relations and reflexive anaphora resolution. Voita et al. 2019's method works as follows. First, sentences are parsed using CoreNLP dependency parser (Manning et al., 2014) . Then, relative string positions (e.g., one token back, two tokens back) of all instances in each syntactic dependency were counted. Considering the proportion of the most frequent relative position as the baseline, attention heads are selected as specialized for a particular dependency relation if attention is paid for the corresponding dependent at least 10% more often than the baseline. In other words, there must be some evidence that the attention head is sensitive to the dependency and not merely string position.",
"cite_spans": [
{
"start": 10,
"end": 29,
"text": "Voita et al. (2019)",
"ref_id": "BIBREF40"
},
{
"start": 249,
"end": 271,
"text": "(Manning et al., 2014)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "GPT-2 for Psycholinguistic Analysis",
"sec_num": "3"
},
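{
"text": "The selection criterion can be sketched as follows (a simplified illustration, not the authors' code; dependency_triples is a hypothetical list of (sentence id, governor index, dependent index) tuples obtained from the parsed corpus, and attention_fn is assumed to return one head's attention matrix for a given sentence):

# Simplified sketch of the Voita et al. (2019)-style head-selection criterion.
from collections import Counter

def positional_baseline(dependency_triples):
    # Proportion of the single most frequent relative position (e.g., -1 for nsubj).
    offsets = Counter(dep - gov for _, gov, dep in dependency_triples)
    return offsets.most_common(1)[0][1] / len(dependency_triples)

def head_accuracy(dependency_triples, attention_fn):
    # How often this head's maximal attention from the governor lands on the dependent.
    hits = 0
    for sent_id, gov, dep in dependency_triples:
        attn = attention_fn(sent_id)  # [seq_len, seq_len] attention matrix for one head
        if attn[gov].argmax() == dep:
            hits += 1
    return hits / len(dependency_triples)

def is_specialized(dependency_triples, attention_fn):
    # A head counts as specialized if its accuracy beats the baseline by at least 10% (relative),
    # matching the 22% baseline / 24.2% threshold reported below for reflexives.
    return head_accuracy(dependency_triples, attention_fn) >= 1.1 * positional_baseline(dependency_triples)
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GPT-2 for Psycholinguistic Analysis",
"sec_num": "3"
},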
{
"text": "To find attention heads responsible for the relation between subjects and verbs, we used the CoreNLP parser on 148,376 sentences from the Brown corpus and Gutenberg corpus provided via Natural Language Toolkit (NLTK) (Bird et al., 2009) , extracting 49,145 nsubj relations, which associate nominal subjects and their governors which are mostly verbs. The most frequent relative position for nsubj dependency relation is -1, which means that the nominal subjects usually come right before their governor, taking up 42% of the cases.",
"cite_spans": [
{
"start": 217,
"end": 236,
"text": "(Bird et al., 2009)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "GPT-2 for Psycholinguistic Analysis",
"sec_num": "3"
},
{
"text": "After analyzing the attention distribution pattern using GPT-2, we obtained four syntactic heads that were found to be partly specialized for nsubj dependency relations: head4_3 (59%); head3_6 (51%); head6_0 (49%); head2_9 (49%) 1 . Although we expect that the four syntactic heads responsible for nsubj dependency relation may play distinct roles, in our analyses here we simply use the best performing head (head4_3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GPT-2 for Psycholinguistic Analysis",
"sec_num": "3"
},
{
"text": "The same method was implemented to find attention heads responsible for reflexive anaphora resolution. The only difference was that we used NeuralCoref (Wolf et al., 2018) to count relative position of antecedents to reflexive anaphora since the dependency parser does not associate antecedents and anaphora. Out of 2,660 sentences that includes reflexive anaphora, we extracted 510 sentences where NeuralCoref identified a single unique antecedent for the reflexive pronoun. The most fre- quent relative position for reflexive anaphora and their antecedents was -2, meaning that antecedents appear before reflexive anaphora having one word in between. The proportion of the highest relative position was 22%, requiring 24.2 % of accuracy for attention heads to be considered responsible for reflexive anaphora resolution. We found four heads whose accuracies are higher than the threshold: head1_5 (44%); head3_5 (39%); head4_3 (27%); head6_0 (25%), and we again take the best performing head (head1_5) for further analysis.",
"cite_spans": [
{
"start": 152,
"end": 171,
"text": "(Wolf et al., 2018)",
"ref_id": "BIBREF45"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "GPT-2 for Psycholinguistic Analysis",
"sec_num": "3"
},
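{
"text": "For the reflexive case, the counting step can be sketched as follows (a hypothetical illustration relying on NeuralCoref's documented spaCy extension attributes; the corpus loading and the exact mention-selection heuristics of the original study are not shown and the details here are assumptions):

# Hypothetical sketch: relative positions of antecedents to reflexive pronouns via NeuralCoref.
from collections import Counter
import spacy
import neuralcoref

nlp = spacy.load('en_core_web_sm')
neuralcoref.add_to_pipe(nlp)

def antecedent_offsets(sentences):
    # For each reflexive pronoun belonging to exactly one coreference cluster, record the
    # relative position of the head token of the cluster's main mention (the assumed antecedent).
    offsets = Counter()
    for sentence in sentences:
        doc = nlp(sentence)
        for tok in doc:
            if tok.lower_.endswith(('self', 'selves')) and tok._.in_coref:
                clusters = tok._.coref_clusters
                if len(clusters) == 1:
                    antecedent = clusters[0].main
                    offsets[antecedent.root.i - tok.i] += 1
    return offsets

# offsets.most_common(1) then gives the modal relative position (reported above as -2, at 22%).
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GPT-2 for Psycholinguistic Analysis",
"sec_num": "3"
},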
{
"text": "Metrics. We define here three metrics for our analyses: surprisal, attention entropy from syntactic heads, and attention to target. We use surprisal for making reading time predictions, but use the attention metrics to provide insight into the processing at the critical region and therefore the representations computed in the prefix before the critical region. Surprisal is thus based on the final prediction of the entire model, but the attention metrics are associated with the attention heads most specialized for our dependencies of interest. Surprisal (Hale, 2001; Levy, 2008) is defined as the negative log probability of the word given left context. Surprisal(w) = \u2212log 2 P (w|context) (1) Any use of surprisal requires adoption of some kind of language model; e.g. some past work has used probabilistic CFGs (Levy, 2008) . Here we use GPT-2, which computes after each word a probability distribution over its large lexicon that is conditioned on its internal representation of the left context.",
"cite_spans": [
{
"start": 559,
"end": 571,
"text": "(Hale, 2001;",
"ref_id": "BIBREF12"
},
{
"start": 572,
"end": 583,
"text": "Levy, 2008)",
"ref_id": "BIBREF17"
},
{
"start": 818,
"end": 830,
"text": "(Levy, 2008)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "GPT-2 for Psycholinguistic Analysis",
"sec_num": "3"
},
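{
"text": "As a concrete illustration, the following sketch (not from the original paper; it assumes the HuggingFace transformers package and the publicly released gpt2 checkpoint) computes per-token surprisal in bits for a sentence; the value at the critical word is then simply looked up:

# Minimal sketch: per-token surprisal from GPT-2 (assumes HuggingFace transformers).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')
model.eval()

def token_surprisals(sentence):
    # Return (token, surprisal-in-bits) pairs for every token after the first.
    ids = tokenizer(sentence, return_tensors='pt').input_ids
    with torch.no_grad():
        logits = model(ids).logits  # [1, seq_len, vocab_size]
    # Log-probabilities assigned to each actual next token given its left context.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    next_ids = ids[0, 1:]
    bits = -log_probs[torch.arange(next_ids.size(0)), next_ids] / torch.log(torch.tensor(2.0))
    tokens = tokenizer.convert_ids_to_tokens(next_ids.tolist())
    return list(zip(tokens, bits.tolist()))

# Example: surprisal at the critical verb 'were' in an ungrammatical, interfering item.
for tok, s in token_surprisals('The slogan on the posters were designed to get attention.'):
    print(f'{tok:>12s}  {s:6.2f} bits')
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GPT-2 for Psycholinguistic Analysis",
"sec_num": "3"
},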
{
"text": "Attention to target is simply the value of the soft attention vector element that corresponds to the target word position, which we denote Attn(w cue , w target ), and indicates how much attention is allocated to the target by one of the specialized attention heads (head4_3 for subject-verb and head1_5 for reflexives.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GPT-2 for Psycholinguistic Analysis",
"sec_num": "3"
},
{
"text": "Attention entropy is a variant of Shannon (1948) 's information entropy that we use as a measure of how sharply focused (low entropy) or diffuse (high entropy) the attention pattern is. (It may be thought of as a measure of the uncertainty about the attentional target, but because the attention values are not probabilities from which targets are sampled, this interpretation is not strictly warranted).",
"cite_spans": [
{
"start": 34,
"end": 48,
"text": "Shannon (1948)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "GPT-2 for Psycholinguistic Analysis",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Entropy(w i ) = i\u22121 j=1 Attn(w i , w j ) \u00d7 log 2 Attn(w i , w j )",
"eq_num": "(2)"
}
],
"section": "GPT-2 for Psycholinguistic Analysis",
"sec_num": "3"
},
{
"text": "where i refers to the location of the critical word, j are locations of prior words, and Attn(w i , w j ) is attention allocated to w j from w i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GPT-2 for Psycholinguistic Analysis",
"sec_num": "3"
},
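{
"text": "A minimal sketch of the two attention metrics follows (again assuming the HuggingFace transformers implementation of GPT-2 with output_attentions enabled; the mapping of the paper's headn_m labels to zero-based layer/head indices, the renormalization over the prefix, and the token indices in the example are assumptions to be verified against the tokenizer output):

# Minimal sketch: attention entropy and attention-to-target for one GPT-2 head.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2', output_attentions=True)
model.eval()

def attention_metrics(sentence, layer, head, cue_idx, target_idx):
    # Entropy of the attention distribution from the cue token over prior words,
    # and the attention mass placed on the target token, for one attention head.
    ids = tokenizer(sentence, return_tensors='pt').input_ids
    with torch.no_grad():
        attentions = model(ids).attentions  # tuple of [1, n_heads, seq_len, seq_len]
    attn_row = attentions[layer][0, head, cue_idx, :cue_idx]  # attention to prior words only
    attn_row = attn_row / attn_row.sum()  # renormalize over the prefix (one reading of Eq. 2)
    entropy = -sum(a * math.log2(a) for a in attn_row.tolist() if a > 0)
    return entropy, attn_row[target_idx].item()

# Example (cue_idx/target_idx are hypothetical indices for ' were' and ' slogan'):
ent, attn_to_target = attention_metrics(
    'The slogan on the posters were designed to get attention.',
    layer=4, head=3, cue_idx=5, target_idx=1)
print(f'entropy = {ent:.2f} bits, attention to target = {attn_to_target:.3f}')
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GPT-2 for Psycholinguistic Analysis",
"sec_num": "3"
},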
{
"text": "To investigate whether GPT-2 may predict facilitatory interference effects in subject-verb agreement, we ran GPT-2 on materials from three studies (Dillon et al., 2013; Wagers et al., 2009) : 48 sets of sentences from Experiments 2-3 in Wagers et al. (2009) 2; 24 sets of sentences from Experiments 4-7 in Wagers et al. (2009) ; 48 sets of sentences from Dillon et al. (2013) (See Table 1 ).",
"cite_spans": [
{
"start": 147,
"end": 168,
"text": "(Dillon et al., 2013;",
"ref_id": "BIBREF6"
},
{
"start": 169,
"end": 189,
"text": "Wagers et al., 2009)",
"ref_id": "BIBREF41"
},
{
"start": 306,
"end": 326,
"text": "Wagers et al. (2009)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [
{
"start": 381,
"end": 388,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Subject-verb Agreement Experiments",
"sec_num": "4"
},
{
"text": "These three sets of sentences have in common a 2 \u00d7 2 structure with the factors grammaticality (grammatical/ungrammatical) and interference (interfering/non-interfering), as described above. Additionally, Wagers et al. (2009) 's Exp 3 also includes an additional condition, subject (singular/plural) for investigating a possible singularplural asymmetry, i.e., asking whether interference effects are equivalent for plural (for plural verbs) and singular (for singular verbs) distractors.",
"cite_spans": [
{
"start": 191,
"end": 225,
"text": "Additionally, Wagers et al. (2009)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Subject-verb Agreement Experiments",
"sec_num": "4"
},
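{
"text": "Reusing the token_surprisals helper sketched in Section 3, the key facilitatory contrast in the ungrammatical conditions of example A can be scored as follows (a usage sketch, not the authors' pipeline; it assumes the critical verb is tokenized as the single BPE token 'Gwere', and the non-interfering variant is constructed by making the distractor singular):

# Facilitatory-interference contrast in the ungrammatical conditions (cf. example A).
ungrammatical = {
    'interfering (plural distractor)': 'The slogan on the posters were designed to get attention.',
    'non-interfering (singular distractor)': 'The slogan on the poster were designed to get attention.',
}
for condition, sentence in ungrammatical.items():
    surprisal_at_verb = next(s for tok, s in token_surprisals(sentence) if tok.endswith('were'))
    print(f'{condition}: {surprisal_at_verb:.2f} bits')
# The facilitatory prediction is lower surprisal at the verb in the interfering condition.
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subject-verb Agreement Experiments",
"sec_num": "4"
},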
{
"text": "Note that sentences from Experiments 2-3 in Wagers et al. (2009) involve structures in which the distractor appears before the target, and so test effects of proactive interference. Thus the distractors are also more distant from verbs than in the other experimental materials.",
"cite_spans": [
{
"start": 44,
"end": 64,
"text": "Wagers et al. (2009)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Subject-verb Agreement Experiments",
"sec_num": "4"
},
{
"text": "Results of surprisal analyses. Figure 4 shows the surprisal computed at the critical verbs in each of the experiments and in each of the four conditions separately (red dots and intervals represent means and conventional 95% confidence intervals). Surprisal matches the important qualitative pattern found in the meta-analysis of first-pass reading times: lower surprisal-facilitatory effects-are found in the ungrammatical conditions when the distractor matches the verb's number, and no inhibitory effects are found in the grammatical conditions. Furthermore, the effects are largest for the case of retroactive interference, where the distractor follows the target and immediately precedes the verb (Figure 4a ), compared to proactive inteference, where the distractor precedes the target (Figure 4c) . The exception is that no facilitatory effects were found when the verb is singular and the target subject is plural (see Figure 4d) . But the facilitatory effect in this condition was not reliably different from zero in the meta-analysis, and it mirrors a plural-singular asymmetry (or markedness effect) found in agreement attraction in production. Results of attention analyses. Our conjecture is that in the interfering conditions where the distractor matches the verb in number that the attention of the nsubj-specialized attention head head4_3 will be distributed to both the target and the distractor. It is possible to visualize exactly this pattern using a tool developed by Vig (2019) . this conjecture: Figure 6 shows two metrics across the four datasets. The interfering conditions always show the highest value of attention entropy and the lowest value of attention to target, which means that the head most specialized for subject-verb relations distributes attention more diffusely and away from the target subject. There is evidence for the expected attention effects even in the grammatical conditions, but in these conditions there is no effect of surprisal. Thus, under a theory in which similarity-based interference exerts its effects on reading time through a surprisal bottleneck (Levy, 2008) , no reading time differences are expected here-even though the underlying representations and attention patterns may reflect the interference.",
"cite_spans": [
{
"start": 1489,
"end": 1499,
"text": "Vig (2019)",
"ref_id": "BIBREF38"
},
{
"start": 2108,
"end": 2120,
"text": "(Levy, 2008)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 31,
"end": 39,
"text": "Figure 4",
"ref_id": null
},
{
"start": 702,
"end": 712,
"text": "(Figure 4a",
"ref_id": null
},
{
"start": 792,
"end": 803,
"text": "(Figure 4c)",
"ref_id": null
},
{
"start": 927,
"end": 937,
"text": "Figure 4d)",
"ref_id": null
},
{
"start": 1519,
"end": 1527,
"text": "Figure 6",
"ref_id": "FIGREF7"
}
],
"eq_spans": [],
"section": "Subject-verb Agreement Experiments",
"sec_num": "4"
},
{
"text": "Preliminary corpus analysis of ungrammatical subject-verb agreement sentences. One possible explanation for the observed facilitatory interference effects is that GPT-2 was exposed to ungrammatical sentences in the training data that have precisely the interference patterns of the ungrammatical sentences in our experiments. To examine such possibility, we analyzed 241 sentences randomly extracted from a Reddit corpus (Chang et al., 2020) whose subjects and verbs do not agree in number, and have either interfering or non-interfering distractors in between. The results shown in Table 2 suggest that interfering distractors occur about twice as often as non-interfering distractors in the case of singular subjects with an ungrammatical plural verb, consistent with our expectations that agreement-attraction errors in production may be evident in un-edited corpora.",
"cite_spans": [],
"ref_spans": [
{
"start": 583,
"end": 590,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Subject-verb Agreement Experiments",
"sec_num": "4"
},
{
"text": "But it seems unlikely that this 2:1 ratio, which singular subj plural subj interfering 80 71 non-interfering 39 51 Table 2 : Results from a preliminary corpus analysis of patterns of ungrammatical subject-verb agreement.",
"cite_spans": [],
"ref_spans": [
{
"start": 115,
"end": 122,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Subject-verb Agreement Experiments",
"sec_num": "4"
},
{
"text": "In the key case of a singular subject and a plural verb, the number of an intervening distractor is about twice as likely to be plural (interfering) rather than singular (non-interfering) . See text for a discussion.",
"cite_spans": [
{
"start": 170,
"end": 187,
"text": "(non-interfering)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Subject-verb Agreement Experiments",
"sec_num": "4"
},
{
"text": "corresponds to about a 1 bit difference in surprisal, is sufficient alone to explain the observed surprisal differences. For example, in the Wagers et al Experiment 4-6, we observed about a 3 bit difference in surprisal, a 2 bit or 4x difference in probability relative to what would be expected on the basis of the corpus counts. More extensive corpus analysis is necessary to confidently rule out this explanation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subject-verb Agreement Experiments",
"sec_num": "4"
},
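{
"text": "To make the arithmetic explicit (a back-of-the-envelope check using the Table 2 counts for singular subjects, not a computation reported in the original study): log_2(80/39) \u2248 1.04 bits, so the corpus asymmetry alone predicts roughly a 1 bit surprisal advantage for the interfering condition; the observed advantage of about 3 bits therefore leaves roughly 3 \u2212 1 = 2 bits, i.e. a factor of 2^2 = 4 in probability, unaccounted for by the corpus counts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subject-verb Agreement Experiments",
"sec_num": "4"
},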
{
"text": "To examine whether the prediction of GPT-2 are consistent with the null interference effects argued for by Dillon et al. (2013) , or show facilitatory interference effects as in the large scale J\u00e4ger et al. (2020) replication, we conducted an experiment using the same methodology as described above for the subject-verb experiments, but using the reflexive materials in Dillon et al. (2013) , and focusing the attention analyses on the head most specialized for reflexive anaphor resolution. Examples of the materials are shown in Table 3 .",
"cite_spans": [
{
"start": 107,
"end": 127,
"text": "Dillon et al. (2013)",
"ref_id": "BIBREF6"
},
{
"start": 371,
"end": 391,
"text": "Dillon et al. (2013)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 532,
"end": 539,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Reflexive Agreement Experiments",
"sec_num": "5"
},
{
"text": "Results of the surprisal analyses. Summaries of the surprisal (and attention metrics) measured at Results of the attention analyses. We found little or no differences between interfering and noninterfering cases in the two attention metrics at-tention entropy and attention to target. It is possible that this is because the attention head head1_5 that we found to be partly specialized for reflexive anaphora resolution is actually not as specialized in reflexive anaphora resolution as head4_3 specialized in nsubj dependency resolution. We cannot conclude yet whether there exist heads that serve this function better (that are not detected by the method of Voita et al. (2019) ), whether GPT-2 is not reliably resolving the reflexive anaphora, or whether GPT-2 is doing so in a way that is dis- Dillon et al. (2013) , used in the GPT-2 experiment on reflexive pronoun agreement. tributed across many attention heads.",
"cite_spans": [
{
"start": 661,
"end": 680,
"text": "Voita et al. (2019)",
"ref_id": "BIBREF40"
},
{
"start": 799,
"end": 819,
"text": "Dillon et al. (2013)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reflexive Agreement Experiments",
"sec_num": "5"
},
{
"text": "Effects of similarity-based interference have been the province of models of noisy memory rather than models of probabilistic expectations, because in standard probabilistic grammars the expectation for the agreement features of a licensor such as a verb or pronoun should not be conditioned upon the agreement features of constituents other than the target licensee. But we show here that a largescale Transformer language model, GPT-2, trained only to predict the next word, nevertheless yields surprisal values that are consistent with facilitatory interference effects due to distractor noun phrases that do not participate in the agreement relations. We also confirmed that two metrics that are easily computed from the Transformers' attention mechanism, attention entropy and attention to target, show patterns in the subject-verb experiments that are consistent with cue-based retrieval models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Future Directions",
"sec_num": "6"
},
{
"text": "Our results are suggestive of a possible interesting link between surprisal and noisy memory representations. The attention patterns that we have discovered must reflect similarity between the representations of the target and distractor noun phrases. This representational similarity is the source of great generalization power, but this generalization can lead to linguistic expectations that are not derived by conventional grammatical analyses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Future Directions",
"sec_num": "6"
},
{
"text": "One limitation of our analyses of attention is that they depend on methods for identifying specialized heads for specific dependency types. It is not clear that we understand enough about Transformer models to do this reliably. But our results suggest that for at least some dependencies, these simple attention metrics and head selection methods can yield interesting insights.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Future Directions",
"sec_num": "6"
},
{
"text": "The approach outlined may provide an important way to combine surprisal and noisy memory accounts, maintaining a surprisal bottleneck. Using trained Transformers has the significant theoretical advantage that the memory representations, the attention/retrieval cues, and thus the predicted similarity effects are learned via a self-supervised prediction task. And so such models naturally yield experience-driven sources of noisy representations that are independent of the process noise assumed in existing memory-based models. Combining the process-and experience-based noise in a single model is an important goal for psycholinguistic theory.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Future Directions",
"sec_num": "6"
},
{
"text": "headn_m refers to the m-th attention head in the n-th layer. Numbers in parentheses indicate accuracies of heads in paying the highest attention to the subject/antecedent by the verb/pronoun.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Wagers et al. (2009)'s materials are an extended and slightly modified version ofPearlmutter et al. (1999)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Natural language processing with Python: analyzing text with the natural language toolkit",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
},
{
"first": "Ewan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Loper",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural language processing with Python: analyz- ing text with the natural language toolkit. \" O'Reilly Media, Inc.\".",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Convokit: A toolkit for the analysis of conversations",
"authors": [
{
"first": "P",
"middle": [],
"last": "Jonathan",
"suffix": ""
},
{
"first": "Caleb",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Liye",
"middle": [],
"last": "Chiam",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Fu",
"suffix": ""
},
{
"first": "Justine",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Cristian",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Danescu-Niculescu-Mizil",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 21th Annual Meeting of the Special Interest Group on Discourse and Dialogue",
"volume": "",
"issue": "",
"pages": "57--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan P Chang, Caleb Chiam, Liye Fu, An- drew Wang, Justine Zhang, and Cristian Danescu- Niculescu-Mizil. 2020. Convokit: A toolkit for the analysis of conversations. In Proceedings of the 21th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 57-60.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "What don't rnn language models learn about filler-gap dependencies?",
"authors": [
{
"first": "P",
"middle": [],
"last": "Rui",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Chaves",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Society for Computation in Linguistics",
"volume": "3",
"issue": "1",
"pages": "20--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rui P Chaves. 2020. What don't rnn language models learn about filler-gap dependencies? Proceedings of the Society for Computation in Linguistics, 3(1):20- 30.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Some concepts and consequences of the theory of government and binding",
"authors": [
{
"first": "Noam",
"middle": [],
"last": "Chomsky",
"suffix": ""
}
],
"year": 1982,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Noam Chomsky et al. 1982. Some concepts and con- sequences of the theory of government and binding. MIT press.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "What does bert look at? an analysis of bert's attention",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Urvashi",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1906.04341"
]
},
"num": null,
"urls": [],
"raw_text": "Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D Manning. 2019. What does bert look at? an analysis of bert's attention. arXiv preprint arXiv:1906.04341.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Assessing the ability of transformer-based neural models to represent structurally unbounded dependencies. Proceedings of the Society for Computation in Linguistics",
"authors": [
{
"first": "Jillian",
"middle": [
"K"
],
"last": "",
"suffix": ""
},
{
"first": "Da",
"middle": [],
"last": "Costa",
"suffix": ""
},
{
"first": "Rui P",
"middle": [],
"last": "Chaves",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "3",
"issue": "",
"pages": "189--198",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jillian K Da Costa and Rui P Chaves. 2020. Assessing the ability of transformer-based neural models to rep- resent structurally unbounded dependencies. Pro- ceedings of the Society for Computation in Linguis- tics, 3(1):189-198.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Contrasting intrusion profiles for agreement and anaphora: Experimental and modeling evidence",
"authors": [
{
"first": "Brian",
"middle": [],
"last": "Dillon",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Mishler",
"suffix": ""
},
{
"first": "Shayne",
"middle": [],
"last": "Sloggett",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Phillips",
"suffix": ""
}
],
"year": 2013,
"venue": "Journal of Memory and Language",
"volume": "69",
"issue": "2",
"pages": "85--103",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brian Dillon, Alan Mishler, Shayne Sloggett, and Colin Phillips. 2013. Contrasting intrusion profiles for agreement and anaphora: Experimental and model- ing evidence. Journal of Memory and Language, 69(2):85-103.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "What bert is not: Lessons from a new suite of psycholinguistic diagnostics for language models",
"authors": [
{
"first": "Allyson",
"middle": [],
"last": "Ettinger",
"suffix": ""
}
],
"year": 2020,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "8",
"issue": "",
"pages": "34--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Allyson Ettinger. 2020. What bert is not: Lessons from a new suite of psycholinguistic diagnostics for lan- guage models. Transactions of the Association for Computational Linguistics, 8:34-48.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Lossy-context surprisal: An informationtheoretic model of memory effects in sentence processing",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Futrell",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Gibson",
"suffix": ""
},
{
"first": "Roger P",
"middle": [],
"last": "Levy",
"suffix": ""
}
],
"year": 2020,
"venue": "Cognitive science",
"volume": "44",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Futrell, Edward Gibson, and Roger P Levy. 2020. Lossy-context surprisal: An information- theoretic model of memory effects in sentence pro- cessing. Cognitive science, 44(3):e12814.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Rnns as psycholinguistic subjects: Syntactic state and grammatical dependency",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Futrell",
"suffix": ""
},
{
"first": "Ethan",
"middle": [],
"last": "Wilcox",
"suffix": ""
},
{
"first": "Takashi",
"middle": [],
"last": "Morita",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Levy",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1809.01329"
]
},
"num": null,
"urls": [],
"raw_text": "Richard Futrell, Ethan Wilcox, Takashi Morita, and Roger Levy. 2018. Rnns as psycholinguistic sub- jects: Syntactic state and grammatical dependency. arXiv preprint arXiv:1809.01329.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Attention in natural language processing",
"authors": [
{
"first": "Andrea",
"middle": [],
"last": "Galassi",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Lippi",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Torroni",
"suffix": ""
}
],
"year": 2020,
"venue": "IEEE Transactions on Neural Networks and Learning Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrea Galassi, Marco Lippi, and Paolo Torroni. 2020. Attention in natural language processing. IEEE Transactions on Neural Networks and Learning Sys- tems.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Linguistic complexity: Locality of syntactic dependencies",
"authors": [
{
"first": "Edward",
"middle": [],
"last": "Gibson",
"suffix": ""
}
],
"year": 1998,
"venue": "Cognition",
"volume": "68",
"issue": "1",
"pages": "1--76",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edward Gibson. 1998. Linguistic complexity: Locality of syntactic dependencies. Cognition, 68(1):1-76.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A probabilistic earley parser as a psycholinguistic model",
"authors": [
{
"first": "John",
"middle": [],
"last": "Hale",
"suffix": ""
}
],
"year": 2001,
"venue": "Second Meeting of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Hale. 2001. A probabilistic earley parser as a psy- cholinguistic model. In Second Meeting of the North American Chapter of the Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Similarity-based interference in sentence comprehension: Literature review and bayesian meta-analysis",
"authors": [
{
"first": "A",
"middle": [],
"last": "Lena",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "J\u00e4ger",
"suffix": ""
},
{
"first": "Shravan",
"middle": [],
"last": "Engelmann",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Vasishth",
"suffix": ""
}
],
"year": 2017,
"venue": "Journal of Memory and Language",
"volume": "94",
"issue": "",
"pages": "316--339",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lena A J\u00e4ger, Felix Engelmann, and Shravan Vasishth. 2017. Similarity-based interference in sentence comprehension: Literature review and bayesian meta-analysis. Journal of Memory and Language, 94:316-339.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Interference patterns in subject-verb agreement and reflexives revisited: A large-sample study",
"authors": [
{
"first": "A",
"middle": [],
"last": "Lena",
"suffix": ""
},
{
"first": "Daniela",
"middle": [],
"last": "J\u00e4ger",
"suffix": ""
},
{
"first": "Julie",
"middle": [
"A"
],
"last": "Mertzen",
"suffix": ""
},
{
"first": "Shravan",
"middle": [],
"last": "Van Dyke",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Vasishth",
"suffix": ""
}
],
"year": 2020,
"venue": "Journal of Memory and Language",
"volume": "111",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lena A J\u00e4ger, Daniela Mertzen, Julie A Van Dyke, and Shravan Vasishth. 2020. Interference patterns in subject-verb agreement and reflexives revisited: A large-sample study. Journal of Memory and Lan- guage, 111:104063.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Agreement attraction in spanish comprehension",
"authors": [
{
"first": "Sol",
"middle": [],
"last": "Lago",
"suffix": ""
},
{
"first": "Diego",
"middle": [
"E"
],
"last": "Shalom",
"suffix": ""
},
{
"first": "Mariano",
"middle": [],
"last": "Sigman",
"suffix": ""
},
{
"first": "Ellen",
"middle": [
"F"
],
"last": "Lau",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Phillips",
"suffix": ""
}
],
"year": 2015,
"venue": "Journal of Memory and Language",
"volume": "82",
"issue": "",
"pages": "133--149",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sol Lago, Diego E Shalom, Mariano Sigman, Ellen F Lau, and Colin Phillips. 2015. Agreement attraction in spanish comprehension. Journal of Memory and Language, 82:133-149.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Deep learning. nature",
"authors": [
{
"first": "Yann",
"middle": [],
"last": "Lecun",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "521",
"issue": "",
"pages": "436--444",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. 2015. Deep learning. nature, 521(7553):436-444.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Expectation-based syntactic comprehension",
"authors": [
{
"first": "Roger",
"middle": [],
"last": "Levy",
"suffix": ""
}
],
"year": 2008,
"venue": "Cognition",
"volume": "106",
"issue": "3",
"pages": "1126--1177",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roger Levy. 2008. Expectation-based syntactic com- prehension. Cognition, 106(3):1126-1177.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Memory and surprisal in human sentence comprehension",
"authors": [
{
"first": "Roger",
"middle": [],
"last": "Levy",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roger Levy. 2013. Memory and surprisal in human sentence comprehension.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "An architecturally-based theory of human sentence comprehension",
"authors": [
{
"first": "Richard L Lewis ; Carnegie-Mellon Univ Pitts-Burgh Pa Dept Of Computer",
"middle": [],
"last": "Science",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard L Lewis. 1993. An architecturally-based the- ory of human sentence comprehension. Techni- cal report, CARNEGIE-MELLON UNIV PITTS- BURGH PA DEPT OF COMPUTER SCIENCE.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Interference in short-term memory: The magical number two (or three) in sentence processing",
"authors": [
{
"first": "L",
"middle": [],
"last": "Richard",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lewis",
"suffix": ""
}
],
"year": 1996,
"venue": "Journal of psycholinguistic research",
"volume": "25",
"issue": "1",
"pages": "93--115",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard L Lewis. 1996. Interference in short-term memory: The magical number two (or three) in sen- tence processing. Journal of psycholinguistic re- search, 25(1):93-115.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "An activation-based model of sentence processing as skilled memory retrieval",
"authors": [
{
"first": "L",
"middle": [],
"last": "Richard",
"suffix": ""
},
{
"first": "Shravan",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Vasishth",
"suffix": ""
}
],
"year": 2005,
"venue": "Cognitive science",
"volume": "29",
"issue": "3",
"pages": "375--419",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard L Lewis and Shravan Vasishth. 2005. An activation-based model of sentence processing as skilled memory retrieval. Cognitive science, 29(3):375-419.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Computational principles of working memory in sentence comprehension",
"authors": [
{
"first": "Richard",
"middle": [
"L"
],
"last": "Lewis",
"suffix": ""
},
{
"first": "Shravan",
"middle": [],
"last": "Vasishth",
"suffix": ""
},
{
"first": "Julie",
"middle": [
"A"
],
"last": "Van Dyke",
"suffix": ""
}
],
"year": 2006,
"venue": "Trends in cognitive sciences",
"volume": "10",
"issue": "10",
"pages": "447--454",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard L Lewis, Shravan Vasishth, and Julie A Van Dyke. 2006. Computational principles of work- ing memory in sentence comprehension. Trends in cognitive sciences, 10(10):447-454.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Open sesame: Getting inside bert's linguistic knowledge",
"authors": [
{
"first": "Yongjie",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Chern Tan",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Frank",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "241--253",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yongjie Lin, Yi Chern Tan, and Robert Frank. 2019. Open sesame: Getting inside bert's linguistic knowl- edge. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 241-253.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Syntactic structure from deep learning. Annual Review of Linguistics",
"authors": [
{
"first": "Tal",
"middle": [],
"last": "Linzen",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tal Linzen and Marco Baroni. 2021. Syntactic struc- ture from deep learning. Annual Review of Linguis- tics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "The stanford corenlp natural language processing toolkit",
"authors": [
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "Jenny",
"middle": [
"Rose"
],
"last": "Finkel",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "McClosky",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of 52nd annual meeting of the association for computational linguistics: system demonstrations",
"volume": "",
"issue": "",
"pages": "55--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D Manning, Mihai Surdeanu, John Bauer, Jenny Rose Finkel, Steven Bethard, and David Mc- Closky. 2014. The stanford corenlp natural language processing toolkit. In Proceedings of 52nd annual meeting of the association for computational linguis- tics: system demonstrations, pages 55-60.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Finitary models of language users",
"authors": [
{
"first": "George",
"middle": [
"A"
],
"last": "Miller",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Chomsky",
"suffix": ""
}
],
"year": 1963,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George A Miller and Noam Chomsky. 1963. Finitary models of language users.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Exploratory and confirmatory analyses in sentence processing: A case study of number interference in german",
"authors": [
{
"first": "Bruno",
"middle": [],
"last": "Nicenboim",
"suffix": ""
},
{
"first": "Shravan",
"middle": [],
"last": "Vasishth",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Engelmann",
"suffix": ""
},
{
"first": "Katja",
"middle": [],
"last": "Suckow",
"suffix": ""
}
],
"year": 2018,
"venue": "Cognitive science",
"volume": "42",
"issue": "",
"pages": "1075--1100",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bruno Nicenboim, Shravan Vasishth, Felix Engelmann, and Katja Suckow. 2018. Exploratory and confirma- tory analyses in sentence processing: A case study of number interference in german. Cognitive sci- ence, 42:1075-1100.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Working memory differences in long-distance dependency resolution",
"authors": [
{
"first": "Bruno",
"middle": [],
"last": "Nicenboim",
"suffix": ""
},
{
"first": "Shravan",
"middle": [],
"last": "Vasishth",
"suffix": ""
},
{
"first": "Carolina",
"middle": [],
"last": "Gattei",
"suffix": ""
},
{
"first": "Mariano",
"middle": [],
"last": "Sigman",
"suffix": ""
},
{
"first": "Reinhold",
"middle": [],
"last": "Kliegl",
"suffix": ""
}
],
"year": 2015,
"venue": "Frontiers in Psychology",
"volume": "6",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bruno Nicenboim, Shravan Vasishth, Carolina Gattei, Mariano Sigman, and Reinhold Kliegl. 2015. Work- ing memory differences in long-distance depen- dency resolution. Frontiers in Psychology, 6:312.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Agreement processes in sentence comprehension",
"authors": [
{
"first": "Neal",
"middle": [
"J"
],
"last": "Pearlmutter",
"suffix": ""
},
{
"first": "Susan",
"middle": [
"M"
],
"last": "Garnsey",
"suffix": ""
},
{
"first": "Kathryn",
"middle": [],
"last": "Bock",
"suffix": ""
}
],
"year": 1999,
"venue": "Journal of Memory and language",
"volume": "41",
"issue": "3",
"pages": "427--456",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Neal J Pearlmutter, Susan M Garnsey, and Kathryn Bock. 1999. Agreement processes in sentence com- prehension. Journal of Memory and language, 41(3):427-456.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Language models are unsupervised multitask learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2019,
"venue": "OpenAI blog",
"volume": "1",
"issue": "8",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "A mathematical theory of communication. The Bell system technical journal",
"authors": [
{
"first": "Claude",
"middle": [
"E"
],
"last": "Shannon",
"suffix": ""
}
],
"year": 1948,
"venue": "",
"volume": "27",
"issue": "",
"pages": "379--423",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Claude E Shannon. 1948. A mathematical theory of communication. The Bell system technical journal, 27(3):379-423.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "The time-course of the application of binding constraints in reference resolution",
"authors": [
{
"first": "Patrick",
"middle": [],
"last": "Sturt",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of Memory and Language",
"volume": "48",
"issue": "3",
"pages": "542--562",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patrick Sturt. 2003. The time-course of the application of binding constraints in reference resolution. Jour- nal of Memory and Language, 48(3):542-562.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Lstm neural networks for language modeling",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Sundermeyer",
"suffix": ""
},
{
"first": "Ralf",
"middle": [],
"last": "Schl\u00fcter",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2012,
"venue": "Thirteenth annual conference of the international speech communication association",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Sundermeyer, Ralf Schl\u00fcter, and Hermann Ney. 2012. Lstm neural networks for language modeling. In Thirteenth annual conference of the international speech communication association.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Distinguishing effects of structure and decay on attachment and repair: A cue-based parsing account of recovery from misanalyzed ambiguities",
"authors": [
{
"first": "Julie",
"middle": [
"A"
],
"last": "Van Dyke",
"suffix": ""
},
{
"first": "Richard",
"middle": [
"L"
],
"last": "Lewis",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of Memory and Language",
"volume": "49",
"issue": "3",
"pages": "285--316",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julie A Van Dyke and Richard L Lewis. 2003. Dis- tinguishing effects of structure and decay on attach- ment and repair: A cue-based parsing account of re- covery from misanalyzed ambiguities. Journal of Memory and Language, 49(3):285-316.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Modeling garden path effects without explicit hierarchical syntax",
"authors": [
{
"first": "Marten",
"middle": [],
"last": "Van Schijndel",
"suffix": ""
},
{
"first": "Tal",
"middle": [],
"last": "Linzen",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marten Van Schijndel and Tal Linzen. 2018. Model- ing garden path effects without explicit hierarchical syntax. In CogSci.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Sentence Comprehension as a Cognitive Process: A computational approach",
"authors": [
{
"first": "Shravan",
"middle": [],
"last": "Vasishth",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Engelmann",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shravan Vasishth and Felix Engelmann. 2021. Sen- tence Comprehension as a Cognitive Process: A computational approach. Cambridge University Press.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "A multiscale visualization of attention in the transformer model",
"authors": [
{
"first": "Jesse",
"middle": [],
"last": "Vig",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "37--42",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jesse Vig. 2019. A multiscale visualization of atten- tion in the transformer model. In Proceedings of the 57th Annual Meeting of the Association for Compu- tational Linguistics: System Demonstrations, pages 37-42.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Analyzing the structure of attention in a transformer language model",
"authors": [
{
"first": "Jesse",
"middle": [],
"last": "Vig",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "63--76",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jesse Vig and Yonatan Belinkov. 2019. Analyzing the structure of attention in a transformer language model. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 63-76.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned",
"authors": [
{
"first": "Elena",
"middle": [],
"last": "Voita",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Talbot",
"suffix": ""
},
{
"first": "Fedor",
"middle": [],
"last": "Moiseev",
"suffix": ""
},
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5797--5808",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elena Voita, David Talbot, Fedor Moiseev, Rico Sen- nrich, and Ivan Titov. 2019. Analyzing multi-head self-attention: Specialized heads do the heavy lift- ing, the rest can be pruned. In Proceedings of the 57th Annual Meeting of the Association for Compu- tational Linguistics, pages 5797-5808.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Agreement attraction in comprehension: Representations and processes",
"authors": [
{
"first": "Matthew",
"middle": [
"W"
],
"last": "Wagers",
"suffix": ""
},
{
"first": "Ellen",
"middle": [
"F"
],
"last": "Lau",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Phillips",
"suffix": ""
}
],
"year": 2009,
"venue": "Journal of Memory and Language",
"volume": "61",
"issue": "2",
"pages": "206--237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew W Wagers, Ellen F Lau, and Colin Phillips. 2009. Agreement attraction in comprehension: Rep- resentations and processes. Journal of Memory and Language, 61(2):206-237.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "On the predictive power of neural language models for human real-time comprehension behavior",
"authors": [
{
"first": "Ethan",
"middle": [],
"last": "Wilcox",
"suffix": ""
},
{
"first": "Jon",
"middle": [],
"last": "Gauthier",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Levy",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2006.01912"
]
},
"num": null,
"urls": [],
"raw_text": "Ethan Wilcox, Jon Gauthier, Jennifer Hu, Peng Qian, and Roger Levy. 2020. On the predictive power of neural language models for human real-time compre- hension behavior. arXiv preprint arXiv:2006.01912.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "What syntactic structures block dependencies in rnn language models",
"authors": [
{
"first": "Ethan",
"middle": [],
"last": "Wilcox",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Futrell",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1905.10431"
]
},
"num": null,
"urls": [],
"raw_text": "Ethan Wilcox, Roger Levy, and Richard Futrell. 2019. What syntactic structures block dependen- cies in rnn language models? arXiv preprint arXiv:1905.10431.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "What do rnn language models learn about filler",
"authors": [
{
"first": "Ethan",
"middle": [],
"last": "Wilcox",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Takashi",
"middle": [],
"last": "Morita",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Futrell",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1809.00042"
]
},
"num": null,
"urls": [],
"raw_text": "Ethan Wilcox, Roger Levy, Takashi Morita, and Richard Futrell. 2018. What do rnn language mod- els learn about filler-gap dependencies? arXiv preprint arXiv:1809.00042.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Neuralcoref: Coreference resolution in spacy with neural networks",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Ravenscroft",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Maxwell",
"middle": [],
"last": "Rebo",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, James Ravenscroft, Julien Chaumond, and Maxwell Rebo. 2018. Neuralcoref: Coreference resolution in spacy with neural networks.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Illusory licensing effects across dependency types: Erp evidence",
"authors": [
{
"first": "Ming",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Dillon",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Phillips",
"suffix": ""
}
],
"year": 2009,
"venue": "Brain and Language",
"volume": "108",
"issue": "1",
"pages": "40--55",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ming Xiang, Brian Dillon, and Colin Phillips. 2009. Illusory licensing effects across dependency types: Erp evidence. Brain and Language, 108(1):40-55.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "How facilitatory and inhibitory interference effects arise in subject-verb dependency creation in cuebased retrieval parsing. The critical manipulation concerns the overlap of number feature between the distractor, target, and retrieval cue.",
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"text": "Example B. below shows a pair of sentences from Dillon et al. (2013) used to probe facilitatory effects in reflexive pronoun agreement (again, the target antecedent and pronoun are boldfaced and the distractor is underlined): B. (1) interfering The basketball coach who trained the star players usually blamed themselves for the ...",
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"text": "Results of the meta-analysis on subject-verb number agreement fromVasishth and Engelmann (2021).",
"num": null,
"type_str": "figure"
},
"FIGREF4": {
"uris": null,
"text": "From J\u00e4ger et al. (2020). Posterior estimates of facilitatory interference effects in subject-verb and reflexive agreement processing in a large scale replication ofDillon et al. (2013), the original effects, and predictions from theLewis and Vasishth (2005) model.",
"num": null,
"type_str": "figure"
},
"FIGREF5": {
"uris": null,
"text": "(a) Wagers et al. 2009 (Exp 4-6). (b) Dillon et al. 2013 (Exp 1) (c) Wagers et al. 2009 (Exp 2-3, singular subject) (d) Wagers et al. 2009 (Exp 3, plural subject) Figure 4: The surprisal of critical verbs computed by GPT-2 on the materials in four subject-verb number agreement experiments. Each small dot is a data point from one sentence; the red dots and intervals represent means and 95% confidence intervals.",
"num": null,
"type_str": "figure"
},
"FIGREF6": {
"uris": null,
"text": "Figure 5shows an example visualization.Analyses of the attention entropy and attention to target metrics provide quantitative evidence forinterfering non-interfering grammatical ungrammatical An example of the attention distribution of an attention head specialized for subject-verb dependencies in the four conditions of the subject-verb agreement experiments.",
"num": null,
"type_str": "figure"
},
"FIGREF7": {
"uris": null,
"text": "(a) Wagers et al. 2009 (Exp 4-6) (b) Dillon et al. 2013 (Exp 1) (c) Wagers et al. 2009 (Exp 3, singular subject) (d) Wagers et al. 2009 (Exp 3, plural subject) Metrics quantifying attention patterns of the attention head most specialized for subject-verb relations, computed at the verb in the subject-verb agreement experiments. reflexive anaphora are provided in Figure 7. Consistent with the large scale replication of Dillon et al. (2013) conducted by J\u00e4ger et al. (2020) (but inconsistent with the null results reported by Dillon et al), we found lower surprisal values in the ungrammatical interfering conditions, consistent with a facilitatory interference effect.",
"num": null,
"type_str": "figure"
},
"FIGREF8": {
"uris": null,
"text": "Results of the GPT-2 reflexive agreement experiment using materials fromDillon et al. (2013).",
"num": null,
"type_str": "figure"
},
"TABREF0": {
"content": "
Wagers 2009 | non-int | gram | The commentators who the viewer trusts ... |
Exp 2-3 | int | ungram | *The commentators who the viewer trust ... |
| non-int | ungram | *The commentator who the viewer trust ... |
| int | gram | The slogan on the poster was designed ... |
Wagers (2009) | non-int | gram | The slogan on the posters was designed ... |
Exp 4-6 | int | ungram | *The slogan on the posters were designed ... |
| non-int | ungram | *The slogan on the poster were designed ... |
| int | gram | The executive who oversaw the middle manager |
| | | apparently was dishonest ... |
| non-int | gram | The executive who oversaw the middle managers |
Dillon 2013 | | | apparently was dishonest ... |
Exp 1 agrmt | int | ungram | *The executive who oversaw the middle managers |
| | | apparently were dishonest ... |
| non-int | ungram | |
",
"num": null,
"html": null,
"text": "A set of data included for the experiment on subject-verb agreement.(Wagers et al. (2009)'s Exp3 also included sets with plural subjects in the ungrammatical conditions.)Interference Grammaticality Example sentences int gram The commentator who the viewer trusts ...",
"type_str": "table"
},
"TABREF1": {
"content": "Exp 1 reflexive | int | ungram | *The basketball coach who trained the star players |
| | | usually blamed themselves for the ... |
| non-int | ungram | * |
",
"num": null,
"html": null,
"text": "Interference Grammaticality Example sentences int gram The basketball coach who trained the star player usually blamed himself for the ... non-int gram The basketball coach who trained the star players Dillon 2013usually blamed himself for the ... The basketball coach who trained the star player usually blamed themselves for the ...",
"type_str": "table"
},
"TABREF2": {
"content": "",
"num": null,
"html": null,
"text": "Examples from",
"type_str": "table"
}
}
}
}