|
{ |
|
"paper_id": "N19-1004", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T13:59:25.966826Z" |
|
}, |
|
"title": "Neural Language Models as Psycholinguistic Subjects: Representations of Syntactic State", |
|
"authors": [ |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Futrell", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "UC Irvine", |
|
"location": {} |
|
}, |
|
"email": "rfutrell@uci.edu" |
|
}, |
|
{ |
|
"first": "Ethan", |
|
"middle": [], |
|
"last": "Wilcox", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Harvard University", |
|
"location": {} |
|
}, |
|
"email": "wilcoxeg@g.harvard.edu" |
|
}, |
|
{ |
|
"first": "Takashi", |
|
"middle": [], |
|
"last": "Morita", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Kyoto University", |
|
"location": {} |
|
}, |
|
"email": "tmorita@alum.mit.edu" |
|
}, |
|
{ |
|
"first": "Peng", |
|
"middle": [], |
|
"last": "Qian", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "pqian@mit.edu" |
|
}, |
|
{ |
|
"first": "Miguel", |
|
"middle": [], |
|
"last": "Ballesteros", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "IBM Research, MIT-IBM Watson AI Lab", |
|
"institution": "", |
|
"location": {} |
|
}, |
|
"email": "miguel.ballesteros@ibm.com" |
|
}, |
|
{ |
|
"first": "Roger", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "rplevy@mit.edu" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We investigate the extent to which the behavior of neural network language models reflects incremental representations of syntactic state. To do so, we employ experimental methodologies which were originally developed in the field of psycholinguistics to study syntactic representation in the human mind. We examine neural network model behavior on sets of artificial sentences containing a variety of syntactically complex structures. These sentences not only test whether the networks have a representation of syntactic state, they also reveal the specific lexical cues that networks use to update these states. We test four models: two publicly available LSTM sequence models of English (Jozefowicz et al., 2016; Gulordava et al., 2018) trained on large datasets; an RNN Grammar (Dyer et al., 2016) trained on a small, parsed dataset; and an LSTM trained on the same small corpus as the RNNG. We find evidence for basic syntactic state representations in all models, but only the models trained on large datasets are sensitive to subtle lexical cues signalling changes in syntactic state.", |
|
"pdf_parse": { |
|
"paper_id": "N19-1004", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We investigate the extent to which the behavior of neural network language models reflects incremental representations of syntactic state. To do so, we employ experimental methodologies which were originally developed in the field of psycholinguistics to study syntactic representation in the human mind. We examine neural network model behavior on sets of artificial sentences containing a variety of syntactically complex structures. These sentences not only test whether the networks have a representation of syntactic state, they also reveal the specific lexical cues that networks use to update these states. We test four models: two publicly available LSTM sequence models of English (Jozefowicz et al., 2016; Gulordava et al., 2018) trained on large datasets; an RNN Grammar (Dyer et al., 2016) trained on a small, parsed dataset; and an LSTM trained on the same small corpus as the RNNG. We find evidence for basic syntactic state representations in all models, but only the models trained on large datasets are sensitive to subtle lexical cues signalling changes in syntactic state.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "It is now standard practice in NLP to derive sentence representations using neural sequence models of various kinds (Elman, 1990; Sutskever et al., 2014; Goldberg, 2017; Peters et al., 2018; Devlin et al., 2018) . However, we do not yet have a firm understanding of the precise content of these representations, which poses problems for interpretability, accountability, and controllability of NLP systems. More specifically, the success of neural sequence models has raised the question of whether and how these networks learn robust syntactic generalizations about natural language, which would enable robust performance even on data that differs from the peculiarities of the training set.", |
|
"cite_spans": [ |
|
{ |
|
"start": 116, |
|
"end": 129, |
|
"text": "(Elman, 1990;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 130, |
|
"end": 153, |
|
"text": "Sutskever et al., 2014;", |
|
"ref_id": "BIBREF43" |
|
}, |
|
{ |
|
"start": 154, |
|
"end": 169, |
|
"text": "Goldberg, 2017;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 170, |
|
"end": 190, |
|
"text": "Peters et al., 2018;", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 191, |
|
"end": 211, |
|
"text": "Devlin et al., 2018)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Here we build upon recent work studying neural language models using experimental techniques that were originally developed in the field of psycholinguistics to study language processing in the human mind. The basic idea is to examine language models' behavior on targeted sentences chosen to probe particular aspects of the learned representations. This approach was introduced by Linzen et al. (2016) , followed more recently by others (Bernardy and Lappin, 2017; Enguehard et al., 2017; Gulordava et al., 2018) , who used an agreement prediction task (Bock and Miller, 1991) to study whether RNNs learn a hierarchical morphosyntactic dependency: for example, that The key to the cabinets. . . can grammatically continue with was but not with were. This dependency turns out to be learnable from a language modeling objective (Gulordava et al., 2018) . Subsequent work has extended this approach to other grammatical phenomena, with positive results for filler-gap dependencies (Chowdhury and Zamparelli, 2018; Wilcox et al., 2018) and negative results for anaphoric dependencies (Marvin and Linzen, 2018) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 382, |
|
"end": 402, |
|
"text": "Linzen et al. (2016)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 438, |
|
"end": 465, |
|
"text": "(Bernardy and Lappin, 2017;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 466, |
|
"end": 489, |
|
"text": "Enguehard et al., 2017;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 490, |
|
"end": 513, |
|
"text": "Gulordava et al., 2018)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 554, |
|
"end": 577, |
|
"text": "(Bock and Miller, 1991)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 828, |
|
"end": 852, |
|
"text": "(Gulordava et al., 2018)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 1013, |
|
"end": 1033, |
|
"text": "Wilcox et al., 2018)", |
|
"ref_id": "BIBREF46" |
|
}, |
|
{ |
|
"start": 1082, |
|
"end": 1107, |
|
"text": "(Marvin and Linzen, 2018)", |
|
"ref_id": "BIBREF33" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this work, we consider syntactic representations of a different kind. Previous studies have focused on relationships of dependency: one word licenses another word, which is tested by asking whether a language model favors one (grammatically licensed) form over another in a particular context. Here we focus instead on whether neural language models show evidence for incremental syntactic state representations: whether behavior of neural language models reflects the kind of generalizations that would be captured using a stack-based incremental parse state in a symbolic grammar-based model. For example, during the underlined portion of Example (1), an incremental language model should represent and maintain the knowledge that it is currently inside a subordinate clause, implying (among other things) that a full main clause must follow.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "(1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "As the doctor studied the textbook, the nurse walked into the office.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this work, we use a targeted evaluation approach (Marvin and Linzen, 2018) to elicit evidence for syntactic state representations from language models. That is, we examine language model behavior on artificially constructed sentences designed to expose behavior that is crucially dependent on syntactic state representations. In particular, we study complex subordinate clauses and garden path effects (based on mainverb/reduced-relative ambiguities and NP/Z ambiguities). We ask three general questions: (1) Is there basic evidence for the representation of syntactic state? (2) What textual cues does a neural language model use to infer changes to syntactic state? (3) Do the networks maintain knowledge about syntactic state over long spans of complex text, or do the syntactic state representations degrade? Among neural language models, we study both generic sequence models (LSTMs), which have no explicit representation of syntactic structure, and an RNN Grammar (RNNG) (Dyer et al., 2016) , which explicitly calculates Penn Treebank-style context-free syntactic representations as part of the process of assigning probabilities to words. This comparison allows us to evaluate the extent to which explicit representation of syntactic structure makes models more or less sensitive to syntactic state. RNNGs have been found to outperform LSTMs not only in overall test-set perplexity (Dyer et al., 2016) , but also in modeling long-distance number agreement in for certain model configurations; our work extends this comparison to a variety of syntactic state phenomena.", |
|
"cite_spans": [ |
|
{ |
|
"start": 52, |
|
"end": 77, |
|
"text": "(Marvin and Linzen, 2018)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 981, |
|
"end": 1000, |
|
"text": "(Dyer et al., 2016)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 1393, |
|
"end": 1412, |
|
"text": "(Dyer et al., 2016)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We investigate neural language model behavior primarily by studying the surprisal, or log inverse probability, that a language model assigns to each word in a sentence:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "General methods", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "S(x i ) = \u2212 log 2 p(x i |h i\u22121 ),", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "General methods", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "where x i is the current word or character, h i\u22121 is the model's hidden state before consuming x i , the probability is calculated from the network's softmax activation, and the logarithm is taken in base 2, so that surprisal is measured in bits. Surprisal is equivalent to the pointwise contribution to the language modeling loss function due to a word.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "General methods", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In psycholinguistics, the common practice is to study reaction times per word (for example, reading time as measured by an eyetracker), as a measure of the word-by-word difficulty of online language processing. These reading times are often taken to reflect the extent to which humans expect certain words in context, and may be generally proportional to surprisal given the comprehender's probabilistic language model (Hale, 2001; Levy, 2008; Smith and Levy, 2013; Futrell and Levy, 2017) . In this study, we take language model surprisal as the analogue of human reading time, using it to probe the neural networks' expectations about what words will follow in certain contexts. There is a long tradition linking RNN performance to human language processing (Elman, 1990; Christiansen and Chater, 1999; MacDonald and Christiansen, 2002) and grammaticality judgments (Lau et al., 2017) , and RNN surprisals are a strong predictor of human reading times (Frank and Bod, 2011; Goodkind and Bicknell, 2018) . RNNGs have also been used as models of human online language processing .", |
|
"cite_spans": [ |
|
{ |
|
"start": 419, |
|
"end": 431, |
|
"text": "(Hale, 2001;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 432, |
|
"end": 443, |
|
"text": "Levy, 2008;", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 444, |
|
"end": 465, |
|
"text": "Smith and Levy, 2013;", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 466, |
|
"end": 489, |
|
"text": "Futrell and Levy, 2017)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 760, |
|
"end": 773, |
|
"text": "(Elman, 1990;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 774, |
|
"end": 804, |
|
"text": "Christiansen and Chater, 1999;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 805, |
|
"end": 838, |
|
"text": "MacDonald and Christiansen, 2002)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 868, |
|
"end": 886, |
|
"text": "(Lau et al., 2017)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 954, |
|
"end": 975, |
|
"text": "(Frank and Bod, 2011;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 976, |
|
"end": 1004, |
|
"text": "Goodkind and Bicknell, 2018)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "General methods", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In each experiment presented below, we design a set of sentences such that the word-by-word surprisal values will show evidence for syntactic state representations. The idea is that certain words will be surprising to a language model only if the model has a representation of a certain syntactic state going into the word. We analyze wordby-word surprisal profiles for these sentences using regression analysis. Except where otherwise noted, all statistics are derived from linear mixedeffects models (Baayen et al., 2008) with sumcoded fixed-effect predictors and maximal random slope structure (Barr et al., 2013) . This method lets us factor out by-item variation in surprisal and focus on the contrasts between conditions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 502, |
|
"end": 523, |
|
"text": "(Baayen et al., 2008)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 597, |
|
"end": 616, |
|
"text": "(Barr et al., 2013)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental methodology", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "We study the behavior of four models of English: two LSTMs trained on large data, an an RNNG and an LSTM trained on matched, smaller data (the Penn Treebank). The models are summarized in Table 1 . All models are trained on a language modeling objective.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 188, |
|
"end": 195, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Models tested", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Our first LTSM is the model presented in Jozefowicz et al. (2016) as \"BIG LSTM+CNN Inputs\", which we call \"JRNN\", which was trained on the One Billion Word Benchmark (Chelba et al., 2013) and CNN character embeddings as input. The second large LSTM is the model described in the supplementary materials of Gulordava et al. 2018, which we call \"GRNN\", trained on 90 million tokens of English Wikipedia with two hidden layers of 650 hidden units each.", |
|
"cite_spans": [ |
|
{ |
|
"start": 166, |
|
"end": 187, |
|
"text": "(Chelba et al., 2013)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Models tested", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Our RNNG is trained on syntactically labeled Penn Treebank data (Marcus et al., 1993) , using 256-dimensional word embeddings for the input layer and 256-dimensional hidden layers, and dropout probability 0.3. Next-word predictions are obtained through hierarchical softmax with 140 clusters, obtained with the greedy agglomerative clustering algorithm of Brown et al. (1992) . We estimate word surprisals using word-synchronous beam search (Stern et al., 2017; : at each word w i a beam of incremental parses is filled, the summed forward probabilities (Stolcke, 1995) of all candidates on the beam is taken as a lower bound on the prefix probability: P min (w 1...i ), and the surprisal of the i-th word in the sentence is estimated as log P min (w 1...i ) P min (w 1...i\u22121 ) . Our action beam is size 100, and our word beam is size 10. Finally, to disentangle effects of training set from model architecture, we use an LSTM trained on string data from the Penn Treebank training set, which we call TinyLSTM. For TinyLSTM we use 256dimensional word-embedding inputs and hidden layers and dropout probability 0.3, just as with the RNNG.", |
|
"cite_spans": [ |
|
{ |
|
"start": 64, |
|
"end": 85, |
|
"text": "(Marcus et al., 1993)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 356, |
|
"end": 375, |
|
"text": "Brown et al. (1992)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 441, |
|
"end": 461, |
|
"text": "(Stern et al., 2017;", |
|
"ref_id": "BIBREF41" |
|
}, |
|
{ |
|
"start": 554, |
|
"end": 569, |
|
"text": "(Stolcke, 1995)", |
|
"ref_id": "BIBREF42" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Models tested", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "We begin by studying subordinate clauses, a key example of a construction requiring stack-like representation of syntactic state. In such constructions, as shown in Example (1), a subordinator such as \"as\" or \"when\" serves as a cue that the following clause is a subordinate clause, meaning that it must be followed by some main (matrix) clause. In an incremental language model, this knowledge must be maintained and carried forward while processing the words inside subordinate clause. A grammar-based symbolic language model (e.g., Stolcke, 1995; Manning and Carpen-ter, 2000) would maintain this knowledge by keeping track of syntactic rules representing the incomplete subordinate clause and the upcoming main clause in a stack data structure. Psycholinguistic research has clearly demonstrated that humans maintain representations of this kind in syntactic processing (Staub and Clifton, 2006; Lau et al., 2006; Levy et al., 2012 ). Here we ask whether the string completion probabilities produced by neural language models show evidence of the same knowledge.", |
|
"cite_spans": [ |
|
{ |
|
"start": 535, |
|
"end": 549, |
|
"text": "Stolcke, 1995;", |
|
"ref_id": "BIBREF42" |
|
}, |
|
{ |
|
"start": 550, |
|
"end": 579, |
|
"text": "Manning and Carpen-ter, 2000)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 874, |
|
"end": 899, |
|
"text": "(Staub and Clifton, 2006;", |
|
"ref_id": "BIBREF40" |
|
}, |
|
{ |
|
"start": 900, |
|
"end": 917, |
|
"text": "Lau et al., 2006;", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 918, |
|
"end": 935, |
|
"text": "Levy et al., 2012", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Subordinate clauses", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We can detect the knowledge of syntactic state in this case by examining whether the network licenses and requires a matrix clause following the subordinate clause. These expectations can be detected by examining surprisal differences between sentences of the form in Example (2):(2) a. As the doctor studied the textbook, the nurse walked into the office.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Subordinate clauses", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "[SUBordinator, MATRIX] b. *As the doctor studied the textbook.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Subordinate clauses", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "[SUB, NO-MATRIX] c. ?The doctor studied the textbook, the nurse walked into the office.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Subordinate clauses", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "[NO-SUBordinator, MATRIX] d. The doctor studied the textbook.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Subordinate clauses", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "[NO-SUB, NO-MATRIX] If the network licenses a matrix clause following the subordinate clause-and maintains knowledge of that licensing relationship throughout the clause, from the subordinator to the comma-then this should be manifested as lower surprisal at the matrix clause in (2-a) as compared to (2-c). We call this the matrix licensing effect: the surprisal of the condition [SUB, MATRIX] minus [NOSUB, MATRIX] , which will be negative if there is a licensing effect. If the network requires a following matrix clause, then this will be manifested as higher surprisal at the matrix clause for (2-b) compared with (2-d). We call this the no-matrix penalty effect: the surprisal of [SUB,NOMATRIX] minus [NOSUB, NOMATRIX] , which will be positive if there is a penalty. Figure 1 : Effect of subordinator absence/presence on surprisal of continuations. Red: no-matrix penalty effect. Blue: matrix licensing effect. In this and all other figures, unless otherwise noted, error bars represent 95% confidence intervals of the contrasts between conditions shown, computed from the standard error of the by-item and by-condition mean surprisals after subtracting out the by-item means (Masson and Loftus, 2003) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 9, |
|
"end": 19, |
|
"text": "NO-MATRIX]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 381, |
|
"end": 394, |
|
"text": "[SUB, MATRIX]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 401, |
|
"end": 416, |
|
"text": "[NOSUB, MATRIX]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 686, |
|
"end": 700, |
|
"text": "[SUB,NOMATRIX]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 707, |
|
"end": 724, |
|
"text": "[NOSUB, NOMATRIX]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1182, |
|
"end": 1207, |
|
"text": "(Masson and Loftus, 2003)", |
|
"ref_id": "BIBREF34" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 773, |
|
"end": 781, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Subordinate clauses", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We designed 23 experimental items on the pattern of (2) and calculated difference in the sum surprisal of the words in the matrix clause. 1 Figure 3 shows the matrix licensing effect (in blue) and the no-matrix penalty effect (in red), averaged across items. For all models, we see a facilitative matrix licensing effect (p < .001 for all models), smallest in TinyLSTM. However, we only find a significant no-matrix penalty for GRNN and the RNNG (p < .001 in both): the other models do not significantly penalize an ungrammatical continuation (p = .9 for JRNN; p = .5 for TinyLSTM). That is, JRNN and TinyLSTM give no indication that (2-b) is less probable than (2-c).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 140, |
|
"end": 148, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Subordinate clauses", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We found that all models at least partially represent the licensing relationship between a subordinate and matrix clause. However, in order to fully represent the syntactic requirements induced by a subordinator, it seems that a model needs either large amounts of data (as in GRNN) or explicit representation of syntax (as in the RNNG, as opposed to TinyLSTM).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Subordinate clauses", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The foregoing results show that neural language models use the presence of a subordinator as a cue to the onset of a subordinate clause, and that they maintain knowledge that they are in a subordinate clause throughout the intervening material up to the comma. Now we probe the ability of models to maintain this knowledge over long spans of complex intervening material. To do so, we use sentences on the template of (2) and add intervening material modifying the NPs in the subordinate clause. To both of these NPs (in subject and object position), we add modifiers of increasing syntactic complexity: PPs, subject-extracted relative clauses (SRCs), and object-extracted relative clauses (ORCs), as shown in Figure 2 . We study the extent to which these modifiers weaken the language models' expectations about the upcoming matrix clause.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 710, |
|
"end": 718, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Maintenance and degradation of syntactic state", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "As a summary measure of the strength of language models' expectations about an upcoming matrix clause, we collapse the two measures of the previous section into one: the matrix licensing interaction, consisting of the difference between the no-matrix penalty effect and the matrix licensing effect (the two bars in Figure 1) . A similar measure was used to detect filler-gap dependencies by Wilcox et al. (2018) . Figure 3 shows the strength of the matrix licensing interaction given sentences with various modifiers inserted. For the large LSTMs, GRNN exhibits a strong interaction when the intervening material is short and syntactically simple, and the interaction gets progressively weaker as the intervening material becomes progressively longer and more complex (p < 0.001 for subject postmodifiers and p < 0.01 object postmodifiers). The other models show less interpretable behavior.", |
|
"cite_spans": [ |
|
{ |
|
"start": 391, |
|
"end": 411, |
|
"text": "Wilcox et al. (2018)", |
|
"ref_id": "BIBREF46" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 315, |
|
"end": 324, |
|
"text": "Figure 1)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 414, |
|
"end": 422, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Maintenance and degradation of syntactic state", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Our results indicate that at least some large LSTMs, along with the RNNG, are capable of maintaining a representation of syntactic state over spans of complex intervening material. Quantified as a licensing interaction, this representation of syntactic state exhibits the most clearly understandable behavior in GRNN, which shows a graceful degradation of syntactic expectations as the complexity of intervening material increases. The representation is maintained most strongly in the RNNG, except for one particular construction (object-position SRCs).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Maintenance and degradation of syntactic state", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "As the doctor in a white lab coat (PP) who was wearing a white lab coat (SRC) who the administrator had recently hired (ORC) (Subject interveners) studied the textbook about several recent advances in cancer therapy (PP) that described several recent advances in cancer therapy (SRC) that colleagues had written on cancer therapy (ORC) (Object interveners)", |
|
"cite_spans": [ |
|
{ |
|
"start": 278, |
|
"end": 283, |
|
"text": "(SRC)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Maintenance and degradation of syntactic state", |
|
"sec_num": "3.1" |
|
}, |
|
|
{ |
|
"text": "The major phenomenon that has been used to probe incremental syntactic representations in humans is garden path effects. Garden path effects arise from local ambiguities, where a context leads a comprehender to believe one parse is likely, but then a disambiguating word forces her to drastically revise her beliefs, resulting in high surprisal/reading time at the disambiguating word. In effect, the comprehender is \"led down the garden path\" by a locally likely but ultimately incorrect parse (Bever, 1970) . Garden-pathing in LSTMs has recently been demonstrated by van Schijndel and Linzen (2018a,b) in the context of modeling human reading times. Garden path effects allow us to detect representations of syntactic state because if a person or language model shows a garden path effect at a word, that means that the person or model had some belief about syntactic state which was disconfirmed by that word. In psycholinguistics, these effects have been used to study the question of what information determines people's beliefs about likely parses given locally ambiguous contexts: for example, whether factors such as world knowledge play a role (Ferreira and Clifton, 1986; Trueswell et al., 1994) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 495, |
|
"end": 508, |
|
"text": "(Bever, 1970)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 587, |
|
"end": 603, |
|
"text": "Linzen (2018a,b)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1153, |
|
"end": 1181, |
|
"text": "(Ferreira and Clifton, 1986;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 1182, |
|
"end": 1205, |
|
"text": "Trueswell et al., 1994)", |
|
"ref_id": "BIBREF45" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Garden path effects", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Here we study two major kinds of local ambiguities inducing garden path effects. For each ambiguity, we ask two main questions. First, whether the network shows the basic garden path effect, which would indicate that it had a syntactic state representation that made a disambiguating word surprising. Second, whether the network is sensitive to subtle lexical cues to syntactic structure which may modulate the size of the garden path effect: this question allows us to determine what information the network uses to determine the beginnings and endings of certain syntactic states.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Garden path effects", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The NP/Z ambiguity 2 refers to a local ambiguity in sentences of the form given in Example (3).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "NP/Z Ambiguity", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "( 3) When a comprehender reads the underlined phrase \"the vet with his new assistant\" in (3-a), she may at first believe that this phrase is the direct object of the verb \"scratched\" inside the subordinate clause. However, upon reaching the verb \"took off\", she realizes that the underlined phrase was not in fact an object of the verb \"scratched\", rather it was the subject of a new clause, and the subordinate clause in fact ended after the verb \"scratched\". The key region of the sentence where the garden path disambiguation happenscalled the disambiguator-is the phrase \"took off\", marked in bold.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "NP/Z Ambiguity", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "While a garden path should obtain in (3-a), no such garden path should exist for (3-b), because a comma clearly demarcates the end of the subordinate clause. Therefore a basic garden path effect would be indicated by the difference in surprisal at the disambiguator for (3-a) minus (3-b). Furthermore, if a comprehender is sensitive to the relationship between verb argument structure and clause boundaries, then there should be no garden path in (3-c), because the verb \"struggled\" is INTRANSITIVE: it cannot take an object in English, so an incremental parser should never be misled into believing that \"the vet...\" is its object. This lexical information about syntactic structure is subtle enough that there has been controversy about whether even humans are sensitive to it in online processing (Staub, 2007) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 800, |
|
"end": 813, |
|
"text": "(Staub, 2007)", |
|
"ref_id": "BIBREF39" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "NP/Z Ambiguity", |
|
"sec_num": "4.1" |
|
}, |
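The quantification above (surprisal at the disambiguator in the ambiguous condition minus the comma-disambiguated condition) can be sketched directly. The probabilities here are hypothetical, for illustration only; they do not come from any of the paper's models.

```python
import math

def surprisal(p):
    """Surprisal in bits: -log2 P(word | context)."""
    return -math.log2(p)

def garden_path_effect(p_no_comma, p_comma):
    """Garden path effect in bits: surprisal at the disambiguator in the
    NO-COMMA condition minus the COMMA condition."""
    return surprisal(p_no_comma) - surprisal(p_comma)

# Hypothetical probabilities of the main verb given each prefix: without
# the comma, the model assigns the disambiguator much lower probability.
effect = garden_path_effect(0.0005, 0.02)  # about 5.32 bits
```

A positive effect means the model found the disambiguator harder to predict when the clause boundary was not marked, which is the behavioral signature of a garden path.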
|
{ |
|
"text": "We tested whether neural language models would show the basic garden path effect and if this ef- fect would be modulated by verb transitivity. We constructed 32 items based of the same structure as (3), based on materials from Staub (2007) , manipulating the transitivity of the embedded verb (\"scratched\" vs. \"struggled\"), and the presence of a disambiguating comma at the end of the subordinate clause. An NP/Z garden path effect would show up as increased surprisal at the main verb \"took off\" in the absence of a comma. If the networks use the transitivity of the embedded verb as a cue to clause structure, and maintain that information over the span of six words between the embedded verb and the main verb, then there should be a garden path effect for the transitive verb, but not for the intransitive verb. More generally we would expect a stronger garden path given the transitive verb than given the intransitive verb. Figure 4 shows the mean surprisals at the disambiguator for all four models, for both transitive and intransitive embedded verbs. The overall per-region surprisals, averaged over words in each region, are shown in Figure 5 . We see that a garden path effect exists in all models (though very small in TinyLSTM): all models show significantly higher surprisal at the main verb when the disambiguating comma is absent (p < .001 for all models). However, only the large LSTMs appear to be sensitive to the transitivity of the em- Embedded verb transitivity Garden path effect (bits) Figure 4 : Average garden path effect (surprisal at disambiguator in NO-COMMA condition minus COMMA condition) by model and embedded verb transitivity.", |
|
"cite_spans": [ |
|
{ |
|
"start": 227, |
|
"end": 239, |
|
"text": "Staub (2007)", |
|
"ref_id": "BIBREF39" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 930, |
|
"end": 938, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1144, |
|
"end": 1152, |
|
"text": "Figure 5", |
|
"ref_id": "FIGREF4" |
|
}, |
|
{ |
|
"start": 1510, |
|
"end": 1518, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "NP/Z Garden Path Effect", |
|
"sec_num": "4.1.1" |
|
}, |
|
{ |
|
"text": "bedded verb, showing a smaller garden path effect for intransitive verbs. Statistically, there is a significant interaction of comma presence and verb transitivity only in GRNN and JRNN (GRNN: p < .01; JRNN: p < .001; RNNG: p = .3, TinyL-STM: p = .3).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "NP/Z Garden Path Effect", |
|
"sec_num": "4.1.1" |
|
}, |
|
{ |
|
"text": "All models show NP/Z garden path effects, indicating that they are sensitive to some cues indicat- ing end-of-clause boundaries. However, only the large LSTMs appear to use verb argument structure information as a cue to these boundaries. The results suggest that very large amounts of data may be necessary for current neural models to discover such fine-grained dependencies between syntactic properties of verbs and sentence structure.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "NP/Z Garden Path Effect", |
|
"sec_num": "4.1.1" |
|
}, |
|
{ |
|
"text": "We can probe the maintenance and degradation of syntactic state information by manipulating the length of the intervening material between the onset of the local ambiguity and the disambiguator in examples such as (3). The question is whether the networks maintain the knowledge, while processing the intervening material, that the intervening noun phrase is probably the object of the embedded verb inside a subordinate clause, or whether they gradually lose track of this information. To study this question we used materials on the pattern of (4): these materials manipulate the length of the intervening material (underlined) while holding constant the distance between the subordinator (\"As\") and the disambiguator (grew).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Maintenance and degradation of state", |
|
"sec_num": "4.1.2" |
|
}, |
|
{ |
|
"text": "(4)a. As the author studying Babylon in ancient times wrote the book grew. [SHORT, NO-COMMA] b. As the author studying Babylon in ancient times wrote, the book grew. [SHORT, COMMA] c. As the author wrote the book describing Babylon in ancient times grew. [LONG, NO-COMMA] d. As the author wrote, the book describing Babylon in ancient times grew. [LONG, COMMA] If neural language models show degradation of syntactic state, then the garden path effect (measured as the difference in surprisal between the COMMA and NO-COMMA conditions at the disambiguator) will be smaller for the LONG conditions. We tested 32 sentences of the form in (4), based on materials from Tabor and Hutchins (2004) . The garden path effect sizes are shown in Figure 6 . We find a significant garden effect in all models in the SHORT condition (p < .001 in JRNN and GRNN; p < .01 in the RNNG and p = .03 in TinyLSTM). In the long condition, we find the garden path effect in all models except TinyLSTM: (p < .001 in JRNN; p < .01 in GRNN; p = .02 in the RNNG; and p = .2 in TinyLSTM). The crucial interaction between length and comma presence (indicating that syntactic state degrades) is significant in GRNN (p < .01) and TinyLSTM (p < .001) but not JRNN (p = .7) nor the RNNG (p = .6). The pattern is reminiscent of the results on degradation of state information about subordinate clauses in Section 3, where GRNN and TinyLSTM showed the clearest evidence of degradation. Length of ambiguous region Garden path effect (bits) Figure 6 : Average garden path effect by model and length of ambiguous region.", |
|
"cite_spans": [ |
|
{ |
|
"start": 75, |
|
"end": 92, |
|
"text": "[SHORT, NO-COMMA]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 166, |
|
"end": 180, |
|
"text": "[SHORT, COMMA]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 255, |
|
"end": 271, |
|
"text": "[LONG, NO-COMMA]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 347, |
|
"end": 360, |
|
"text": "[LONG, COMMA]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 665, |
|
"end": 690, |
|
"text": "Tabor and Hutchins (2004)", |
|
"ref_id": "BIBREF44" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 735, |
|
"end": 743, |
|
"text": "Figure 6", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1503, |
|
"end": 1511, |
|
"text": "Figure 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Maintenance and degradation of state", |
|
"sec_num": "4.1.2" |
|
}, |
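The degradation measure, i.e. the length-by-comma-presence interaction, is a difference of garden path effects. The sketch below uses hypothetical disambiguator probabilities (illustration only, not model output) to show how a shrinking effect in the LONG condition indicates loss of syntactic state.

```python
import math

def surprisal(p):
    """Surprisal in bits: -log2 P(word | context)."""
    return -math.log2(p)

def gp_effect(p_no_comma, p_comma):
    """Garden path effect: NO-COMMA surprisal minus COMMA surprisal."""
    return surprisal(p_no_comma) - surprisal(p_comma)

def degradation(short_effect, long_effect):
    """Length x comma interaction: how much the garden path shrinks when
    the ambiguous region is lengthened. Positive values mean the state
    information has degraded."""
    return short_effect - long_effect

# Hypothetical disambiguator probabilities per condition (illustration only).
short = gp_effect(0.001, 0.02)  # SHORT items: large effect
long_ = gp_effect(0.004, 0.02)  # LONG items: smaller effect
assert degradation(short, long_) > 0  # consistent with state degradation
```

Note that humans show the opposite sign ("digging-in"): the garden path grows, rather than shrinks, with the length of the ambiguous region.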
|
{ |
|
"text": "Note that the pattern found here is the opposite of the pattern of human reading times. Humans appear to show \"digging-in\" effects: the longer the span of time between the introduction of a local ambiguity and its resolution, the larger the garden path effect (Tabor and Hutchins, 2004; Levy et al., 2009) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 260, |
|
"end": 286, |
|
"text": "(Tabor and Hutchins, 2004;", |
|
"ref_id": "BIBREF44" |
|
}, |
|
{ |
|
"start": 287, |
|
"end": 305, |
|
"text": "Levy et al., 2009)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Maintenance and degradation of state", |
|
"sec_num": "4.1.2" |
|
}, |
|
{ |
|
"text": "Next we turn to garden path effects induced by the classic Main Verb/Reduced Relative (MV/RR) ambiguity, in which a word is locally ambiguous between being the main verb of a sentence or introducing a reduced relative clause (reduced RC: a relative clause with no explicit complementizer, headed by a passive-participle verb). That ambiguity can be maintained over a long stretch of material:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Main Verb/Reduced Relative Ambiguity", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "(5)a. The woman brought the sandwich from the kitchen tripped on the carpet.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Main Verb/Reduced Relative Ambiguity", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "[REDUCED, AMBIGuous] b. The woman who was brought the sandwich from the kitchen tripped on the carpet.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Main Verb/Reduced Relative Ambiguity", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "[UNREDUCED, AMBIG] c. The woman given the sandwich from the kitchen tripped on the carpet.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Main Verb/Reduced Relative Ambiguity", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "[REDUCED, UNAMBIGuous] d. The woman who was given the sandwich from the kitchen tripped on the carpet. [UNREDUCED, UNAMBIG] In Example (5-a), the verb \"brought\" is initially analyzed as a main verb phrase, but upon Garden path effect (bits) Figure 7 : Garden path effect size for MV/RR ambiguity by model and verb-form ambiguity.", |
|
"cite_spans": [ |
|
{ |
|
"start": 103, |
|
"end": 123, |
|
"text": "[UNREDUCED, UNAMBIG]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 241, |
|
"end": 249, |
|
"text": "Figure 7", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Main Verb/Reduced Relative Ambiguity", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "reaching the verb \"tripped\"-the disambiguator in this case-the reader must re-analyze it as an RC. The garden path should be eliminated in sentences such as (5-b), the UNREDUCED condition, where the words \"who was\" clarify that the verb \"brought\" is part of an RC, rather than the main verb of the sentence. Therefore we quantify the garden path effect as the surprisal at the disambiguator for the REDUCED minus UNREDUCED conditions. There is another possible cue that the initial verb is the head of an RC: the morphological form of the verb. In examples such as (5-c), the the verb \"given\" is unambiguously in its past-participle form, indicating that it cannot be the main verb of the sentence. If a language model is sensitive to morphological cues to syntactic structure, then it should either not show a garden path effect in this UNAMBIGuous condition, or it should show a reduced garden path effect.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Main Verb/Reduced Relative Ambiguity", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "We constructed 29 experimental items following the template of (5). Figure 7 shows the garden path effect sizes by model and verb-form ambiguity. All networks show the basic garden path effect (p < .001 in JRNN, GRNN, and RNNG; p < 0.01 in TinyLSTM). However, the garden path effect in TinyLSTM is much smaller than the other models: RC reduction causes an additional .3 bits of surprisal at the disambiguating verb, as compared to 2.8 bits in the RNNG, 1.9 in JRNN, and 3.6 in GRNN (TinyLSTM's garden path effect is significantly smaller than each other model at p < 0.001).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 68, |
|
"end": 76, |
|
"text": "Figure 7", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Main Verb/Reduced Relative Ambiguity", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "If the network is using the morphological form Table 2 : Summary of results by model and phenomenon. The first check mark indicates basic evidence of syntactic state representation. The second check mark indicates the ability to capture more fine-grained phenomena: for subordination, the no-matrix penalty effect; for the NP/Z garden path, the effect of verb transitivity; and for the MV/RR garden path, the effect of verb morphology.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 47, |
|
"end": 54, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Main Verb/Reduced Relative Ambiguity", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "of the verb as a cue to syntactic structure, then it should show the garden path effect more strongly in the AMBIG condition than the UNAMBIG condition. The large language models and the RNNG do show this pattern: at the critical main-clause verb, surprisal is superadditively highest in the reduced ambiguous condition (the dotted blue line; a positive interaction between the reduced and ambiguous conditions is significant in the three models at p < 0.001). However, TinyLSTM does not show evidence for superadditive surprisal for the ambiguous verbform and the reduced RC (p = .45).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Main Verb/Reduced Relative Ambiguity", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "The three large LSTMs and the RNNG replicate the key human-like garden-path disambiguation effect due to to ambiguity in verb form. But strikingly, even when the participial verbform is unambiguous, there is still a significant garden path effect in all models (p < 0.01 in all models except TinyLSTM, where p = .08). Apparently, these networks treat an unambiguous passive-participial verb as only a noisy cue to the presence of an RC.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Main Verb/Reduced Relative Ambiguity", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "In all models studied, we found clear evidence of basic incremental state syntactic representation. However, models varied in how well they fully captured the effects of such state and the potentially subtle lexical cues indicating the beginnings and endings of such states: only the large LSTMs could sometimes reliably infer clause boundaries from verb argument structure (Section 4.1) and morphological verb-form (Section 4.2), and only GRNN and the RNNG fully captured the proper behavior of subordinate clauses. The results are summarized in Table 2 . We suggest that representation of course-grained syntactic structure requires either syntactic supervision or large data, while exploiting fine-grained lexical cues to structure requires large data.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 547, |
|
"end": 554, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "General Discussion and Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "More generally, we believe that the psycholinguistic methodology employed in this paper provides a valuable lens on the internal representations of black-box systems, and can form the basis for more systematic tests of the linguistic competence of NLP systems. We make all experimental items, results, and analysis scripts available online at github.com/langprocgroup/nn_ syntactic_state.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "General Discussion and Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Note that it would not be sufficient to look at surprisal only at the punctuation token, because the comma could indicate the beginning of a conjoined NP.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For Noun Phrase/Zero ambiguity. At first the embedded verb appears to take an NP object, but later it turns out that it was a zero (null) object.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Mixed-effects modeling with crossed random effects for subjects and items", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Haraald Baayen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Davidson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Bates", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Journal of Memory and Language", |
|
"volume": "59", |
|
"issue": "4", |
|
"pages": "390--412", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R. Haraald Baayen, D.J. Davidson, and D.M. Bates. 2008. Mixed-effects modeling with crossed random effects for subjects and items. Journal of Memory and Language, 59(4):390-412.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Random effects structure for confirmatory hypothesis testing: Keep it maximal", |
|
"authors": [ |
|
{ |
|
"first": "Dale", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Barr", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roger", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christoph", |
|
"middle": [], |
|
"last": "Scheepers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Harry", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Tily", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Journal of Memory and Language", |
|
"volume": "68", |
|
"issue": "3", |
|
"pages": "255--278", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dale J. Barr, Roger P. Levy, Christoph Scheepers, and Harry J Tily. 2013. Random effects structure for confirmatory hypothesis testing: Keep it maximal. Journal of Memory and Language, 68(3):255-278.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Using deep neural networks to learn syntactic agreement. Linguistic Issues in Language Technology", |
|
"authors": [ |
|
{ |
|
"first": "Jean-", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philippe", |
|
"middle": [], |
|
"last": "Bernardy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shalom", |
|
"middle": [], |
|
"last": "Lappin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "15", |
|
"issue": "", |
|
"pages": "1--15", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jean-Philippe Bernardy and Shalom Lappin. 2017. Us- ing deep neural networks to learn syntactic agree- ment. Linguistic Issues in Language Technology, 15:1-15.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "The cognitive basis for linguistic structures", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Thomas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Bever", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1970, |
|
"venue": "Cognition and the Development of Language", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas G. Bever. 1970. The cognitive basis for lin- guistic structures. In J. R. Hayes, editor, Cogni- tion and the Development of Language. Wiley, New York.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Broken agreement", |
|
"authors": [ |
|
{ |
|
"first": "Kathryn", |
|
"middle": [], |
|
"last": "Bock", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Miller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "Cognitive Psychology", |
|
"volume": "23", |
|
"issue": "1", |
|
"pages": "45--93", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kathryn Bock and Carol A Miller. 1991. Broken agree- ment. Cognitive Psychology, 23(1):45-93.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Class-based n-gram models of natural language", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Peter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [ |
|
"V" |
|
], |
|
"last": "Brown", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Desouza", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vincent", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Mercer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jenifer", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Della Pietra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Lai", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Comput. Linguist", |
|
"volume": "18", |
|
"issue": "4", |
|
"pages": "467--479", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter F. Brown, Peter V. deSouza, Robert L. Mer- cer, Vincent J. Della Pietra, and Jenifer C. Lai. 1992. Class-based n-gram models of natural lan- guage. Comput. Linguist., 18(4):467-479.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "One billion word benchmark for measuring progress in statistical language modeling", |
|
"authors": [ |
|
{ |
|
"first": "Ciprian", |
|
"middle": [], |
|
"last": "Chelba", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Schuster", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qi", |
|
"middle": [], |
|
"last": "Ge", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1312.3005" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robin- son. 2013. One billion word benchmark for measur- ing progress in statistical language modeling. arXiv preprint arXiv:1312.3005.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "RNN simulations of grammaticality judgments on long-distance dependencies", |
|
"authors": [ |
|
{ |
|
"first": "Absar", |
|
"middle": [], |
|
"last": "Shammur", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roberto", |
|
"middle": [], |
|
"last": "Chowdhury", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Zamparelli", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 27th International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "133--144", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shammur Absar Chowdhury and Roberto Zamparelli. 2018. RNN simulations of grammaticality judg- ments on long-distance dependencies. In Proceed- ings of the 27th International Conference on Com- putational Linguistics, pages 133-144.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Toward a connectionist model of recursion in human linguistic performance", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Morten", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nick", |
|
"middle": [], |
|
"last": "Christiansen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Chater", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Cognitive Science", |
|
"volume": "23", |
|
"issue": "2", |
|
"pages": "157--205", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Morten H. Christiansen and Nick Chater. 1999. To- ward a connectionist model of recursion in hu- man linguistic performance. Cognitive Science, 23(2):157-205.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1810.04805" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Recurrent neural network grammars", |
|
"authors": [ |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adhiguna", |
|
"middle": [], |
|
"last": "Kuncoro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miguel", |
|
"middle": [], |
|
"last": "Ballesteros", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah A", |
|
"middle": [], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "199--209", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A Smith. 2016. Recurrent neural net- work grammars. In Proceedings of the 2016 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, pages 199-209.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Finding structure in time", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Elman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "Cognitive Science", |
|
"volume": "14", |
|
"issue": "2", |
|
"pages": "179--211", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J.L. Elman. 1990. Finding structure in time. Cognitive Science, 14(2):179-211.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Exploring the syntactic abilities of RNNs with multi-task learning", |
|
"authors": [ |
|
{ |
|
"first": "Emile", |
|
"middle": [], |
|
"last": "Enguehard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tal", |
|
"middle": [], |
|
"last": "Linzen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1706.03542" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Emile Enguehard, Yoav Goldberg, and Tal Linzen. 2017. Exploring the syntactic abilities of RNNs with multi-task learning. arXiv preprint arXiv:1706.03542.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "The independence of syntactic processing", |
|
"authors": [ |
|
{ |
|
"first": "Fernanda", |
|
"middle": [], |
|
"last": "Ferreira", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Charles", |
|
"middle": [], |
|
"last": "Clifton", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1986, |
|
"venue": "Journal of Memory and Language", |
|
"volume": "25", |
|
"issue": "3", |
|
"pages": "348--368", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fernanda Ferreira and Charles Clifton. 1986. The inde- pendence of syntactic processing. Journal of Mem- ory and Language, 25(3):348-368.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Insensitivity of the human sentence-processing system to hierarchical structure", |
|
"authors": [ |
|
{ |
|
"first": "Stefan", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Frank", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rens", |
|
"middle": [], |
|
"last": "Bod", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Psychological Science", |
|
"volume": "22", |
|
"issue": "6", |
|
"pages": "829--834", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stefan L. Frank and Rens Bod. 2011. Insensitivity of the human sentence-processing system to hierar- chical structure. Psychological Science, 22(6):829- 834.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Noisycontext surprisal as a human sentence processing cost model", |
|
"authors": [ |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Futrell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roger", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 15th Conference of the European Chapter", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "688--698", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Richard Futrell and Roger Levy. 2017. Noisy- context surprisal as a human sentence processing cost model. In Proceedings of the 15th Confer- ence of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 688-698, Valencia, Spain.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Neural network methods for natural language processing", |
|
"authors": [ |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Synthesis Lectures on Human Language Technologies", |
|
"volume": "10", |
|
"issue": "1", |
|
"pages": "1--309", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yoav Goldberg. 2017. Neural network methods for nat- ural language processing. Synthesis Lectures on Hu- man Language Technologies, 10(1):1-309.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Predictive power of word surprisal for reading times is a linear function of language model quality", |
|
"authors": [ |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Goodkind", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Klinton", |
|
"middle": [], |
|
"last": "Bicknell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 8th Workshop on Cognitive Modeling and Computational Linguistics (CMCL 2018)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "10--18", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adam Goodkind and Klinton Bicknell. 2018. Predic- tive power of word surprisal for reading times is a linear function of language model quality. In Pro- ceedings of the 8th Workshop on Cognitive Modeling and Computational Linguistics (CMCL 2018), pages 10-18, Salt Lake City, UT. Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Colorless green recurrent networks dream hierarchically", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Gulordava", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Bojanowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Grave", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Linzen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Baroni", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of NAACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "K. Gulordava, P. Bojanowski, E. Grave, T. Linzen, and M. Baroni. 2018. Colorless green recurrent networks dream hierarchically. In Proceedings of NAACL.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Finding syntax in human encephalography with beam search", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Hale", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adhiguna", |
|
"middle": [], |
|
"last": "Kuncoro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Brennan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John Hale, Chris Dyer, Adhiguna Kuncoro, and Jonathan R Brennan. 2018. Finding syntax in hu- man encephalography with beam search. In Pro- ceedings of the 56th Annual Meeting of the Associ- ation for Computational Linguistics (Long Papers), Melbourne, Australia.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "A probabilistic Earley parser as a psycholinguistic model", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "John", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Hale", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of the Second Meeting of the North American Chapter of the Association for Computational Linguistics and Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--8", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John T. Hale. 2001. A probabilistic Earley parser as a psycholinguistic model. In Proceedings of the Sec- ond Meeting of the North American Chapter of the Association for Computational Linguistics and Lan- guage Technologies, pages 1-8.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Exploring the limits of language modeling", |
|
"authors": [ |
|
{ |
|
"first": "Rafal", |
|
"middle": [], |
|
"last": "Jozefowicz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oriol", |
|
"middle": [], |
|
"last": "Vinyals", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Schuster", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Shazeer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yonghui", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. 2016. Exploring the lim- its of language modeling. arXiv, 1602.02410.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "LSTMs can learn syntax-sensitive dependencies well, but modeling structure makes them better", |
|
"authors": [ |
|
{ |
|
"first": "Adhiguna", |
|
"middle": [], |
|
"last": "Kuncoro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Hale", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dani", |
|
"middle": [], |
|
"last": "Yogatama", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Phil", |
|
"middle": [], |
|
"last": "Blunsom", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1426--1436", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yo- gatama, Stephen Clark, and Phil Blunsom. 2018. LSTMs can learn syntax-sensitive dependencies well, but modeling structure makes them better. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1426-1436.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "The role of structural prediction in rapid syntactic analysis", |
|
"authors": [ |
|
{ |
|
"first": "Ellen", |
|
"middle": [], |
|
"last": "Lau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Clare", |
|
"middle": [], |
|
"last": "Stroud", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Brain & Language", |
|
"volume": "98", |
|
"issue": "", |
|
"pages": "74--88", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ellen Lau, Clare Stroud, Silke Plesch, and Colin Phillips. 2006. The role of structural prediction in rapid syntactic analysis. Brain & Language, 98:74- 88.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Expectation-based syntactic comprehension", |
|
"authors": [ |
|
{ |
|
"first": "Roger", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Cognition", |
|
"volume": "106", |
|
"issue": "3", |
|
"pages": "1126--1177", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Roger Levy. 2008. Expectation-based syntactic com- prehension. Cognition, 106(3):1126-1177.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "The processing of extraposed structures in English", |
|
"authors": [ |
|
{ |
|
"first": "Roger", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Evelina", |
|
"middle": [], |
|
"last": "Fedorenko", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mara", |
|
"middle": [], |
|
"last": "Breen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ted", |
|
"middle": [], |
|
"last": "Gibson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Cognition", |
|
"volume": "122", |
|
"issue": "1", |
|
"pages": "12--36", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1016/j.cognition.2011.07.012" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Roger Levy, Evelina Fedorenko, Mara Breen, and Ted Gibson. 2012. The processing of extraposed struc- tures in English. Cognition, 122(1):12-36.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Modeling the effects of memory on human online sentence processing with particle filters", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Roger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Florencia", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas L", |
|
"middle": [], |
|
"last": "Reali", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Griffiths", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "937--944", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Roger P Levy, Florencia Reali, and Thomas L Griffiths. 2009. Modeling the effects of memory on human online sentence processing with particle filters. In Advances in neural information processing systems, pages 937-944.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Assessing the ability of LSTMs to learn syntax-sensitive dependencies", |
|
"authors": [ |
|
{ |
|
"first": "Tal", |
|
"middle": [], |
|
"last": "Linzen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Emmanuel", |
|
"middle": [], |
|
"last": "Dupoux", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "4", |
|
"issue": "", |
|
"pages": "521--535", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of LSTMs to learn syntax-sensitive dependencies. Transactions of the Association for Computational Linguistics, 4:521- 535.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Reassessing working memory: Comment on Just and Carpenter (1992) and Waters and Caplan", |
|
"authors": [ |
|
{ |
|
"first": "Maryellen", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Macdonald", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Morten", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Christiansen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Psychological Review", |
|
"volume": "109", |
|
"issue": "1", |
|
"pages": "35--54", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maryellen C. MacDonald and Morten H. Christiansen. 2002. Reassessing working memory: Comment on Just and Carpenter (1992) and Waters and Caplan (1996). Psychological Review, 109(1):35-54.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Probabilistic parsing using left corner language models", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Christopher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bob", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Carpenter", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Advances in probabilistic and other parsing technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "105--124", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christopher D Manning and Bob Carpenter. 2000. Probabilistic parsing using left corner language models. In Advances in probabilistic and other parsing technologies, pages 105-124. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Building a large annotated corpus of english: The penn treebank", |
|
"authors": [ |
|
{ |
|
"first": "Mitchell", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Marcus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mary", |
|
"middle": [ |
|
"Ann" |
|
], |
|
"last": "Marcinkiewicz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Beatrice", |
|
"middle": [], |
|
"last": "Santorini", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Comput. Linguist", |
|
"volume": "19", |
|
"issue": "2", |
|
"pages": "313--330", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of english: The penn treebank. Comput. Lin- guist., 19(2):313-330.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Targeted syntactic evaluation of language models", |
|
"authors": [ |
|
{ |
|
"first": "Rebecca", |
|
"middle": [], |
|
"last": "Marvin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tal", |
|
"middle": [], |
|
"last": "Linzen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1192--1202", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rebecca Marvin and Tal Linzen. 2018. Targeted syn- tactic evaluation of language models. In Proceed- ings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1192-1202. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Using confidence intervals for graphically based data interpretation", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Michael", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Geoffrey", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Masson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Loftus", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Canadian Journal of Experimental Psychology/Revue canadienne de psychologie exp\u00e9rimentale", |
|
"volume": "57", |
|
"issue": "3", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael EJ Masson and Geoffrey R Loftus. 2003. Us- ing confidence intervals for graphically based data interpretation. Canadian Journal of Experimen- tal Psychology/Revue canadienne de psychologie exp\u00e9rimentale, 57(3):203.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Deep contextualized word representations", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Peters", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Neumann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohit", |
|
"middle": [], |
|
"last": "Iyyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Gardner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of NAACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. In Proceedings of NAACL.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Modeling garden path effects without explicit hierarchical syntax", |
|
"authors": [ |
|
{ |
|
"first": "Marten", |
|
"middle": [], |
|
"last": "Van Schijndel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tal", |
|
"middle": [], |
|
"last": "Linzen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 40th Annual Meeting of the Cognitive Science Society", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marten van Schijndel and Tal Linzen. 2018a. Model- ing garden path effects without explicit hierarchical syntax. In Proceedings of the 40th Annual Meeting of the Cognitive Science Society.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "A neural model of adaptation in reading", |
|
"authors": [ |
|
{ |
|
"first": "Marten", |
|
"middle": [], |
|
"last": "Van Schijndel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tal", |
|
"middle": [], |
|
"last": "Linzen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marten van Schijndel and Tal Linzen. 2018b. A neural model of adaptation in reading. In Proceedings of the 2018 Conference on Empirical Methods in Nat- ural Language Processing.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "The effect of word predictability on reading time is logarithmic", |
|
"authors": [ |
|
{ |
|
"first": "Nathaniel", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roger", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Cognition", |
|
"volume": "128", |
|
"issue": "3", |
|
"pages": "302--319", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nathaniel J. Smith and Roger Levy. 2013. The effect of word predictability on reading time is logarithmic. Cognition, 128(3):302-319.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "The parser doesn't ignore intransitivity, after all", |
|
"authors": [ |
|
{ |
|
"first": "Adrian", |
|
"middle": [], |
|
"last": "Staub", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Journal of Experimental Psychology: Learning, Memory, and Cognition", |
|
"volume": "33", |
|
"issue": "3", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adrian Staub. 2007. The parser doesn't ignore intran- sitivity, after all. Journal of Experimental Psychol- ogy: Learning, Memory, and Cognition, 33(3):550.", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "Syntactic prediction in language comprehension: Evidence from either", |
|
"authors": [ |
|
{ |
|
"first": "Adrian", |
|
"middle": [], |
|
"last": "Staub", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Charles", |
|
"middle": [], |
|
"last": "Clifton", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Journal of Experimental Psychology: Learning, Memory, & Cognition", |
|
"volume": "32", |
|
"issue": "2", |
|
"pages": "425--436", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adrian Staub and Charles Clifton. 2006. Syntactic pre- diction in language comprehension: Evidence from either . . . or. Journal of Experimental Psychology: Learning, Memory, & Cognition, 32(2):425-436.", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "Effective inference for generative neural parsing", |
|
"authors": [ |
|
{ |
|
"first": "Mitchell", |
|
"middle": [], |
|
"last": "Stern", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Fried", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1695--1700", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D17-1178" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mitchell Stern, Daniel Fried, and Dan Klein. 2017. Effective inference for generative neural parsing. In Proceedings of the 2017 Conference on Empiri- cal Methods in Natural Language Processing, pages 1695-1700. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF42": { |
|
"ref_id": "b42", |
|
"title": "An efficient probabilistic context-free parsing algorithm that computes prefix probabilities", |
|
"authors": [ |
|
{ |
|
"first": "Andreas", |
|
"middle": [], |
|
"last": "Stolcke", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Computational Linguistics", |
|
"volume": "21", |
|
"issue": "2", |
|
"pages": "165--201", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andreas Stolcke. 1995. An efficient probabilis- tic context-free parsing algorithm that computes prefix probabilities. Computational Linguistics, 21(2):165-201.", |
|
"links": null |
|
}, |
|
"BIBREF43": { |
|
"ref_id": "b43", |
|
"title": "Sequence to sequence learning with neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oriol", |
|
"middle": [], |
|
"last": "Vinyals", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quoc V", |
|
"middle": [], |
|
"last": "Le", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3104--3112", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural net- works. In Advances in Neural Information Process- ing Systems, pages 3104-3112.", |
|
"links": null |
|
}, |
|
"BIBREF44": { |
|
"ref_id": "b44", |
|
"title": "Evidence for self-organized sentence processing: Digging-in effects", |
|
"authors": [ |
|
{ |
|
"first": "Whitney", |
|
"middle": [], |
|
"last": "Tabor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sean", |
|
"middle": [], |
|
"last": "Hutchins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Journal of Experimental Psychology: Learning", |
|
"volume": "30", |
|
"issue": "2", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Whitney Tabor and Sean Hutchins. 2004. Evidence for self-organized sentence processing: Digging-in ef- fects. Journal of Experimental Psychology: Learn- ing, Memory, and Cognition, 30(2):431.", |
|
"links": null |
|
}, |
|
"BIBREF45": { |
|
"ref_id": "b45", |
|
"title": "Semantic influences on parsing: Use of thematic role information in syntactic ambiguity resolution", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Trueswell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Tanenhaus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Garnsey", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "Journal of Memory and Language", |
|
"volume": "33", |
|
"issue": "", |
|
"pages": "285--318", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John C. Trueswell, Michael K. Tanenhaus, and S. Gar- nsey. 1994. Semantic influences on parsing: Use of thematic role information in syntactic ambiguity res- olution. Journal of Memory and Language, 33:285- 318.", |
|
"links": null |
|
}, |
|
"BIBREF46": { |
|
"ref_id": "b46", |
|
"title": "What do rnn language models learn about filler-gap dependencies?", |
|
"authors": [ |
|
{ |
|
"first": "Ethan", |
|
"middle": [], |
|
"last": "Wilcox", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roger", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Takashi", |
|
"middle": [], |
|
"last": "Morita", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Futrell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "211--221", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ethan Wilcox, Roger Levy, Takashi Morita, and Richard Futrell. 2018. What do rnn language mod- els learn about filler-gap dependencies? In Proceed- ings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 211-221. Association for Computa- tional Linguistics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"text": "present \u2212 sub. absent) surprisal difference", |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"FIGREF1": { |
|
"uris": null, |
|
"text": "Scheme for lengthening the subordinate clause in Section 3.1.", |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"FIGREF2": { |
|
"uris": null, |
|
"text": "Size of matrix clause licensing interaction (see text) given various intervening elements in the subordinate clause. Note that the heatmaps are on different scales across models.", |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"FIGREF4": { |
|
"uris": null, |
|
"text": "Region-by-region surprisal values for NP/Z garden path materials. Surprisal values are averaged across items and across words in regions. The critical region where the garden path effect is visible is the verb \"took off\".", |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"TABREF1": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"html": null, |
|
"text": "Models tested, by architecture, training data, and training data size." |
|
}, |
|
"TABREF2": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>NOCOMMA]</td></tr><tr><td>b. When the dog scratched, the vet with his new</td></tr><tr><td>assistant took off the muzzle. [TRANSITIVE,</td></tr><tr><td>COMMA]</td></tr><tr><td>c. When the dog struggled the vet with</td></tr><tr><td>his new assistant took off the muzzle.</td></tr><tr><td>[INTRANSITIVE, NOCOMMA]</td></tr><tr><td>d. When the dog struggled, the vet with</td></tr><tr><td>his new assistant took off the muzzle.</td></tr><tr><td>[INTRANSITIVE, COMMA]</td></tr></table>", |
|
"html": null, |
|
"text": "a. When the dog scratched the vet with his new assistant took off the muzzle. [TRANSITIVE," |
|
} |
|
} |
|
} |
|
} |