{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T12:16:39.529283Z" }, "title": "Dependency Locality and Neural Surprisal as Predictors of Processing Difficulty: Evidence from Reading Times", "authors": [ { "first": "Neil", "middle": [], "last": "Rathi", "suffix": "", "affiliation": { "laboratory": "", "institution": "Palo Alto High School", "location": {} }, "email": "neilrathi@gmail.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper compares two influential theories of processing difficulty: Gibson (2000)'s Dependency Locality Theory (DLT) and Hale (2001)'s Surprisal Theory. While prior work has aimed to compare DLT and Surprisal Theory (see Demberg and Keller, 2008), they have not yet been compared using more modern and powerful methods for estimating surprisal and DLT integration cost. I compare estimated surprisal values from two models, an RNN and a Transformer neural network, as well as DLT integration cost from a hand-parsed treebank, to reading times from the Dundee Corpus. The results for integration cost corroborate those of Demberg and Keller (2008), finding that it is a negative predictor of reading times overall and a strong positive predictor for nouns, but contrast with their observations for surprisal, finding strong evidence for lexicalized surprisal as a predictor of reading times. Ultimately, I conclude that a broad-coverage model must integrate both theories in order to most accurately predict processing difficulty.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "This paper compares two influential theories of processing difficulty: Gibson (2000)'s Dependency Locality Theory (DLT) and Hale (2001)'s Surprisal Theory. While prior work has aimed to compare DLT and Surprisal Theory (see Demberg and Keller, 2008), they have not yet been compared using more modern and powerful methods for estimating surprisal and DLT integration cost. I compare estimated surprisal values from two models, an RNN and a Transformer neural network, as well as DLT integration cost from a hand-parsed treebank, to reading times from the Dundee Corpus. The results for integration cost corroborate those of Demberg and Keller (2008), finding that it is a negative predictor of reading times overall and a strong positive predictor for nouns, but contrast with their observations for surprisal, finding strong evidence for lexicalized surprisal as a predictor of reading times. Ultimately, I conclude that a broad-coverage model must integrate both theories in order to most accurately predict processing difficulty.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Computational theories of language processing difficulty typically argue for either a memory or expectation-based approach (Boston et al., 2011) . Memory based models (eg. Gibson, 1998 Gibson, , 2000 Lewis and Vasishth, 2005) focus on the idea that resources are allocated for integrating, storing, and retrieving linguistic input. On the other hand, expectation-based models (eg. Hale, 2001; Jurafsky, 1996) propose that resources are proportionally devoted to maintaining different potential representations, leading to an expectation-based view. 
(Levy, 2008, 2013; Smith and Levy, 2013).", "cite_spans": [ { "start": 123, "end": 144, "text": "(Boston et al., 2011)", "ref_id": "BIBREF3" }, { "start": 172, "end": 184, "text": "Gibson, 1998", "ref_id": "BIBREF7" }, { "start": 185, "end": 199, "text": "Gibson, , 2000", "ref_id": "BIBREF8" }, { "start": 200, "end": 225, "text": "Lewis and Vasishth, 2005)", "ref_id": "BIBREF18" }, { "start": 376, "end": 392, "text": "(eg. Hale, 2001;", "ref_id": null }, { "start": 393, "end": 408, "text": "Jurafsky, 1996)", "ref_id": "BIBREF13" }, { "start": 549, "end": 560, "text": "(Levy, 2008", "ref_id": "BIBREF16" }, { "start": 561, "end": 574, "text": "(Levy, , 2013", "ref_id": "BIBREF17" }, { "start": 575, "end": 596, "text": "Smith and Levy, 2013)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Here, I focus on one representative theory from each group. The first is the Dependency Locality Theory, or DLT, initially proposed by Gibson (2000). The DLT quantifies the processing difficulty, or integration cost (IC), of discourse referents (i.e., nouns and finite verbs) as the number of intervening nouns and verbs between a word and its preceding head or dependent, plus an additional cost of 1. The IC is thus always incurred at the second word of the dependency relation in linear order. This is shown in Figure 1. Note that IC only assigns a non-zero cost to discourse referents.", "cite_spans": [ { "start": 145, "end": 158, "text": "Gibson (2000)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 526, "end": 534, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Meanwhile, the Surprisal Theory of Hale (2001) and Levy (2008) formulates the processing difficulty of a word w_n in context C = w_1 ... w_{n-1} as its information-theoretic surprisal, given by", "cite_spans": [ { "start": 11, "end": 22, "text": "Hale (2001)", "ref_id": "BIBREF10" }, { "start": 27, "end": 38, "text": "Levy (2008)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\mathrm{difficulty}(w_n) \\propto -\\log_2 P(w_n \\mid C)", "eq_num": "(1)" } ], "section": "Introduction", "sec_num": "1" }, { "text": "so that words that are more likely in context are assigned lower processing difficulties. Some work has attempted to compare DLT and surprisal as competing predictors of processing difficulty. Most notably, Demberg and Keller (2008) compared processing difficulties from DLT and surprisal to reading times from the Dundee Corpus (Kennedy et al., 2003), a large corpus of eye-tracking data. Specifically, they examined lexicalized surprisal (where the model assigned probabilities to the words themselves), unlexicalized surprisal (where the model only had access to parts of speech), and integration cost. They found that unlexicalized surprisal was a strong predictor of reading times, while IC and lexicalized surprisal were weak predictors. 
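In practice, the surprisal in equation (1) can be read off any autoregressive language model. The following is a minimal sketch assuming the HuggingFace transformers implementation of GPT-2 (an assumed toolchain for illustration, not necessarily the pipeline used in the work discussed here, and with a hypothetical helper name); since GPT-2 operates over subwords, word-level surprisals would additionally require summing the bits of each word's subword tokens before any reading-time analysis.

```python
# Sketch: per-token surprisal (in bits) from an autoregressive LM, per equation (1).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_surprisals(sentence):
    """Return (token, -log2 P(token | context)) pairs; the first token has
    no left context here, so it receives no surprisal value."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # logits at position t-1 score the token at position t
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    scores = log_probs[torch.arange(ids.shape[1] - 1), ids[0, 1:]]
    bits = (-scores / torch.log(torch.tensor(2.0))).tolist()
    tokens = tokenizer.convert_ids_to_tokens(ids[0, 1:].tolist())
    return list(zip(tokens, bits))

print(token_surprisals("The reporter who the senator attacked admitted the error"))
```
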
Demberg and Keller also observed that IC was a strong positive predictor of reading times for nouns, and found little correlation between IC and surprisal.", "cite_spans": [ { "start": 216, "end": 241, "text": "Demberg and Keller (2008)", "ref_id": "BIBREF4" }, { "start": 319, "end": 341, "text": "(Kennedy et al., 2003)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Notably, however, Demberg and Keller's study relied on older methods of calculating surprisal, using a probabilistic context-free grammar (PCFG). Other similar work (e.g., Smith and Levy, 2013) has used n-gram models, which do not account for structural probabilities. Computational language models (LMs) such as n-grams and PCFGs thus estimate the probabilities of words in context far less accurately than humans do.", "cite_spans": [ { "start": 170, "end": 191, "text": "Smith and Levy, 2013)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "[Figure 1 near here: Dependency Locality Theory integration costs for 'The reporter who the senator attacked t admitted the error' (IC: 0 1 0 0 1 3 3 0 1).] However, recent work in neural network language modeling has shown that recurrent neural networks (RNNs) and Transformers are capable not only of learning word sequences, but also underlying syntactic structure (Futrell et al., 2019; Gulordava et al., 2018; Hewitt and Manning, 2019; Manning et al., 2020). This makes them better suited to accurate estimation of surprisal.", "cite_spans": [ { "start": 291, "end": 313, "text": "(Futrell et al., 2019;", "ref_id": "BIBREF6" }, { "start": 314, "end": 337, "text": "Gulordava et al., 2018;", "ref_id": "BIBREF9" }, { "start": 338, "end": 363, "text": "Hewitt and Manning, 2019;", "ref_id": "BIBREF11" }, { "start": 364, "end": 385, "text": "Manning et al., 2020)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, I examine the correlation between reading times, DLT integration cost, and surprisal. Specifically, I compare results from a manually parsed treebank for IC and two neural LMs for surprisal to eye-tracking times sourced from the Dundee Corpus. I additionally examine the correlation between IC and surprisal themselves.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The method in this study is similar to that of prior work empirically testing theories of sentence processing (e.g., Demberg and Keller, 2008; Smith and Levy, 2013; Wilcox et al., 2020), using reading time data to estimate processing difficulty.", "cite_spans": [ { "start": 118, "end": 143, "text": "Demberg and Keller, 2008;", "ref_id": "BIBREF4" }, { "start": 144, "end": 165, "text": "Smith and Levy, 2013;", "ref_id": "BIBREF24" }, { "start": 166, "end": 186, "text": "Wilcox et al., 2020)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Methods", "sec_num": "2" }, { "text": "Specifically, I used a large corpus of eye-tracking data, the Dundee Corpus (Kennedy et al., 2003). The corpus consists of a large set of English data taken from the Independent newspaper. Ten English-speaking participants read selections from this data, comprising 20 unique texts, and their reading times were recorded. 
The final corpus contained 515,020 data points.", "cite_spans": [ { "start": 76, "end": 98, "text": "(Kennedy et al., 2003)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Corpus", "sec_num": "2.1" }, { "text": "As with other work on reading times (see Demberg and Keller, 2008; Smith and Levy, 2013), I excluded a word from the analysis if it was the first or last word in a sentence, contained non-alphabetical characters (including punctuation), was a proper noun, was at the beginning or end of a line, or was skipped during reading. I also excluded the three words following any excluded word, to account for spillover in the regression. This left me with 383,791 data points. For the RNN, I additionally removed any word (and the three words following it) that was not part of the model's Wikipedia training vocabulary.", "cite_spans": [ { "start": 46, "end": 71, "text": "Demberg and Keller, 2008;", "ref_id": "BIBREF4" }, { "start": 72, "end": 93, "text": "Smith and Levy, 2013)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Corpus", "sec_num": "2.1" }, { "text": "As a second analysis, I restricted the data solely to nouns, as well as to nouns and verbs (see Demberg and Keller, 2008), given that DLT only makes its predictions for discourse referents.", "cite_spans": [ { "start": 96, "end": 121, "text": "Demberg and Keller, 2008)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Corpus", "sec_num": "2.1" }, { "text": "For calculating IC, I used the Dundee Treebank (Barrett et al., 2015), a hand-parsed Universal Dependencies style treebank of texts from the Dundee Corpus. This hand-parsed dataset is more accurate than the output of the automatic parser used by Demberg and Keller (2008). To account for syntactic traces, which are not explicitly marked in the annotation, I added traces based on the dependency relations in the parsed sentence. Traces contributed a cost of one as intervening referents, and were added after the following UD relations: acl:relcl, ccomp, dobj, nsubj:pass, and nmod, as in Howcroft and Demberg (2017).", "cite_spans": [ { "start": 47, "end": 69, "text": "(Barrett et al., 2015)", "ref_id": "BIBREF0" }, { "start": 233, "end": 258, "text": "Demberg and Keller (2008)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Integration Cost", "sec_num": "2.2" }, { "text": "I used two language models (LMs) to calculate surprisal. While earlier work has relied on PCFGs and n-grams to estimate surprisal, recent work suggests that neural models are capable of learning and generating syntactic representations to the same degree as grammar-based LMs (van Schijndel and Linzen, 2018). Thus, I used neural LMs in order to generate probability distributions without explicitly encoding symbolic syntax.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Surprisal Models", "sec_num": "2.3" }, { "text": "The first model was a recurrent neural network (RNN) model from Gulordava et al. (2018), trained on 90 million words of English Wikipedia.1 The second model was the GPT-2 Transformer model from Radford et al. (2019). This study used the 1.5-billion-parameter version of GPT-2, trained on the English WebText corpus.", "cite_spans": [ { "start": 136, "end": 157, "text": "Radford et al. (2019)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Surprisal Models", "sec_num": "2.3" }, { "text": "The reading times used for the analyses were first-pass gaze durations. 
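To make the integration-cost computation of Section 2.2 concrete before turning to the regressions, the sketch below scores a toy dependency parse. The helper is hypothetical: the real computation ran over the hand-parsed Dundee Treebank, trace insertion is omitted, and all verbs are treated as finite.

```python
# Sketch of the DLT integration-cost computation described in Section 2.2.
DISCOURSE_POS = {"NOUN", "PROPN", "VERB"}  # simplification: no finiteness check

def integration_cost(words):
    """words: list of (form, upos, head) triples with 1-based head indices
    (0 = root). Cost lands on the later word of each dependency in linear
    order, and only discourse referents receive a non-zero cost."""
    costs = [0] * len(words)
    for i, (_, _, head) in enumerate(words, start=1):
        if head == 0:
            continue
        later = max(i, head)
        if words[later - 1][1] not in DISCOURSE_POS:
            continue
        lo = min(i, head)
        # discourse referents strictly between the two ends of the dependency
        intervening = sum(1 for j in range(lo, later - 1)
                          if words[j][1] in DISCOURSE_POS)
        costs[later - 1] += intervening + 1
    return costs

# "The senator attacked the reporter" with det, nsubj, and obj relations
toy = [("The", "DET", 2), ("senator", "NOUN", 3), ("attacked", "VERB", 0),
       ("the", "DET", 5), ("reporter", "NOUN", 3)]
print(integration_cost(toy))  # -> [0, 1, 1, 0, 2]
```

On the toy parse, attacked receives cost 1 for integrating its adjacent subject, while reporter receives cost 2: 1 for its determiner relation plus 1 for the object relation to attacked, with no intervening referents in either case.
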
As in previous work (Boston et al., 2008; Demberg and Keller, 2008; Monsalve et al., 2012), IC and estimated surprisal values were entered into a mixed-effects model in order to account for other predictors and random effects. I used lme4 to construct linear models, and obtained approximate p-values via Satterthwaite's approximation to the degrees of freedom, using the lmerTest package (Bates et al., 2015; Kuznetsova et al., 2017). To account for spillover effects, where the processing difficulty of the prior word impacts the reading time of the current word (Rayner, 1998), I included the previous word's predictors in the model, as in previous work (see Smith and Levy, 2013; Wilcox et al., 2020):", "cite_spans": [ { "start": 92, "end": 113, "text": "(Boston et al., 2008;", "ref_id": "BIBREF2" }, { "start": 114, "end": 139, "text": "Demberg and Keller, 2008;", "ref_id": "BIBREF4" }, { "start": 140, "end": 162, "text": "Monsalve et al., 2012)", "ref_id": "BIBREF21" }, { "start": 438, "end": 458, "text": "(Bates et al., 2015;", "ref_id": "BIBREF1" }, { "start": 459, "end": 483, "text": "Kuznetsova et al., 2017)", "ref_id": "BIBREF15" }, { "start": 611, "end": 625, "text": "(Rayner, 1998)", "ref_id": "BIBREF23" }, { "start": 653, "end": 674, "text": "Smith and Levy, 2013;", "ref_id": "BIBREF24" }, { "start": 675, "end": 695, "text": "Wilcox et al., 2020)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Analysis", "sec_num": "2.4" }, { "text": "rt ~ s_0 + s_1 + l * f + l_1 * f_1 + p + (1 | subj) (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis", "sec_num": "2.4" }, { "text": "Here, s_0 refers to the surprisal or IC of the current word, s_1 to the surprisal or IC of the previous word, l is word length, f is frequency, l_1 and f_1 are the length and frequency of the previous word, l * f denotes the interaction of l and f (with both main effects included), and p is the word position. Additionally, I performed generalized additive model (GAM) regressions on the raw surprisals. I also examined the correlation between the surprisal estimates and IC. Table 2 shows the coefficients of the regression for the RNN and GPT-2 surprisal estimates. The RNN and GPT-2 surprisal regressions resulted in significant positive coefficients, with spillover effects contributing strongly to reading times. The GAM regressions are shown in Figure 2. Surprisal of w_n had a strong linear effect in both models, with a slightly weaker effect for w_{n-1}. Table 3 shows the coefficients for the IC regression on the Dundee Corpus. There was a significant negative coefficient for integration cost across the full dataset, with insignificant spillover effects (p = 0.49). Restricting the data solely to nouns yielded a strong positive coefficient, while a model fit on both nouns and verbs missed significance by a wide margin. For the RNN and GPT-2, regressions on solely nouns were similar to those on all data, with coefficients of 1.752 and 1.561 for s_0.", "cite_spans": [], "ref_spans": [ { "start": 354, "end": 361, "text": "Table 2", "ref_id": "TABREF2" }, { "start": 629, "end": 637, "text": "Figure 2", "ref_id": "FIGREF1" }, { "start": 748, "end": 755, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Analysis", "sec_num": "2.4" }, { "text": "There was minimal correlation between surprisal and IC across both models, and moderately high correlation between GPT-2 and RNN surprisal values (Table 4). The results from the regression containing both IC and surprisal are shown in Table 1. Surprisal continued to be a significant positive predictor, whereas IC was a significant negative predictor, albeit weaker than on its own. On nouns, IC was again a much stronger positive predictor. 
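For reference, the specification in equation (2) corresponds to something like the following sketch; this is a Python analog using statsmodels, while the study itself fit the model in R with lme4 and lmerTest, and the data-frame column names here are hypothetical.

```python
# Sketch: mixed-effects reading-time model analogous to equation (2).
import pandas as pd
import statsmodels.formula.api as smf

def fit_reading_time_model(df: pd.DataFrame):
    """df columns (hypothetical): rt, s0, s1 (current/previous surprisal or
    IC), l, f (length, frequency), l1, f1 (previous word), p (position), subj."""
    # 'l * f' expands to l + f + l:f, matching the lme4 interaction syntax;
    # groups=subj supplies the (1 | subj) by-participant random intercept.
    model = smf.mixedlm("rt ~ s0 + s1 + l * f + l1 * f1 + p",
                        data=df, groups=df["subj"])
    return model.fit()
```
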
Again, spillover effects for IC were insignificant.", "cite_spans": [], "ref_spans": [ { "start": 146, "end": 154, "text": "(Table 4", "ref_id": "TABREF5" }, { "start": 236, "end": 243, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "3" }, { "text": "This study examined the strength of two different theories of processing difficulty as predictors of eye-tracking data. Overall, neural surprisal has a significant positive relationship with reading times, indicating that it is a strong candidate for a broad-coverage model of sentence processing difficulty. Contrary to the predictions of DLT, there was a significant negative relationship between reading times and integration cost, as in Demberg and Keller (2008). This negative coefficient is likely due to the fact that DLT only makes its reading time predictions for discourse referents, assigning non-referents a processing difficulty of zero. When comparing IC solely to noun reading times, there was a strong positive coefficient, as expected. Additionally, dependency locality has a well-documented crosslinguistic impact on word order (Futrell et al., 2015; Liu et al., 2017; Temperley and Gildea, 2018), suggesting that a modified form of IC that also makes predictions for non-discourse referents may be a stronger and more accurate model.", "cite_spans": [ { "start": 440, "end": 465, "text": "Demberg and Keller (2008)", "ref_id": "BIBREF4" }, { "start": 846, "end": 868, "text": "(Futrell et al., 2015;", "ref_id": "BIBREF5" }, { "start": 869, "end": 886, "text": "Liu et al., 2017;", "ref_id": "BIBREF19" }, { "start": 887, "end": 914, "text": "Temperley and Gildea, 2018)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion and Conclusion", "sec_num": "4" }, { "text": "My results for surprisal are promising evidence that Surprisal Theory can accurately measure sentence processing difficulty. As hypothesized by Surprisal Theory, there was a positive linear effect for both GPT-2 and the RNN. This differs from Demberg and Keller (2008), who found that lexicalized surprisal estimates from a grammar-based LM correlated insignificantly with reading times. As the corpus used in this study was identical to that in Demberg and Keller (2008), these findings support work indicating that neural LMs are capable of simulating human language processing better than grammar-based LMs (Monsalve et al., 2012; van Schijndel and Linzen, 2018). I also found a moderately high correlation between RNN and GPT-2 surprisal values, suggesting that the two models' predictions do not differ substantially.", "cite_spans": [ { "start": 244, "end": 269, "text": "Demberg and Keller (2008)", "ref_id": "BIBREF4" }, { "start": 444, "end": 469, "text": "Demberg and Keller (2008)", "ref_id": "BIBREF4" }, { "start": 614, "end": 637, "text": "(Monsalve et al., 2012;", "ref_id": "BIBREF21" }, { "start": 638, "end": 669, "text": "van Schijndel and Linzen, 2018)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "All Data", "sec_num": null }, { "text": "Similarly to Demberg and Keller (2008), IC and neural surprisal were minimally correlated. When both were added as factors in a mixed-effects model, the results remained similar, with IC being negative for all data and strongly positive for nouns. Taken as a whole, these results suggest that, because IC is a strong predictor for nouns, a true broad-coverage model must integrate ideas from both DLT and Surprisal Theory. 
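The correlations reported in Table 4 amount to pairwise Pearson correlations over aligned per-word predictor values; a minimal sketch, with hypothetical array arguments standing in for the filtered Dundee data:

```python
# Sketch: pairwise Pearson correlations between the three predictors.
from scipy.stats import pearsonr

def predictor_correlations(ic, rnn_s, gpt2_s):
    # each argument: one value per word, aligned across predictors
    return {
        "IC vs. RNN": pearsonr(ic, rnn_s),
        "IC vs. GPT-2": pearsonr(ic, gpt2_s),
        "RNN vs. GPT-2": pearsonr(rnn_s, gpt2_s),
    }
```
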
While I did not note any major gaps in surprisal's predictions, other work has found that it cannot fully account for reading time differences arising from ambiguities (van Schijndel and Linzen, 2018). The positive results here may also be due in part to the fact that the Dundee Corpus consists mostly of common syntactic constructions, and therefore does not provide a fully general picture of sentence processing. Thus, this work is consistent with the hypothesis that, however appealing, no single model of processing suffices as a broad-coverage measure of processing difficulty. Potential future work could aim to combine expectation-based models with memory-based theories, such that processing involves both discarding potential representations and integrating new input into the prior structure.", "cite_spans": [ { "start": 13, "end": 38, "text": "Demberg and Keller (2008)", "ref_id": "BIBREF4" }, { "start": 582, "end": 614, "text": "(van Schijndel and Linzen, 2018)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "All Data", "sec_num": null }, { "text": "I would like to thank Richard Futrell and Michael Hahn for their helpful comments, as well as the anonymous reviewers for their feedback.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": "5" }, { "text": "The RNN consisted of two LSTM layers with 650 units each, with a batch size of 128 and a dropout rate of 0.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "The Dundee treebank", "authors": [ { "first": "Maria", "middle": [], "last": "Barrett", "suffix": "" }, { "first": "\u017deljko", "middle": [], "last": "Agi\u0107", "suffix": "" }, { "first": "Anders", "middle": [], "last": "S\u00f8gaard", "suffix": "" } ], "year": 2015, "venue": "The 14th International Workshop on Treebanks and Linguistic Theories (TLT 14)", "volume": "", "issue": "", "pages": "242--248", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maria Barrett, \u017deljko Agi\u0107, and Anders S\u00f8gaard. 2015. The Dundee treebank. In The 14th International Workshop on Treebanks and Linguistic Theories (TLT 14), pages 242-248.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Fitting linear mixed-effects models using lme4", "authors": [ { "first": "Douglas", "middle": [ "M" ], "last": "Bates", "suffix": "" }, { "first": "Martin", "middle": [], "last": "M\u00e4chler", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Bolker", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Walker", "suffix": "" } ], "year": 2015, "venue": "Journal of Statistical Software", "volume": "67", "issue": "1", "pages": "1--48", "other_ids": { "DOI": [ "10.18637/jss.v067.i01" ] }, "num": null, "urls": [], "raw_text": "Douglas M. Bates, Martin M\u00e4chler, Ben Bolker, and Steve Walker. 2015. Fitting linear mixed-effects models using lme4. 
Journal of Statistical Software, 67(1):1-48.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Parsing costs as predictors of reading difficulty: An evaluation using the Potsdam Sentence Corpus", "authors": [ { "first": "Marisa", "middle": [ "Ferrara" ], "last": "Boston", "suffix": "" }, { "first": "John", "middle": [ "T" ], "last": "Hale", "suffix": "" }, { "first": "Reinhold", "middle": [], "last": "Kliegl", "suffix": "" }, { "first": "Umesh", "middle": [], "last": "Patil", "suffix": "" }, { "first": "Shravan", "middle": [], "last": "Vasishth", "suffix": "" } ], "year": 2008, "venue": "Journal of Eye Movement Research", "volume": "2", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marisa Ferrara Boston, John T. Hale, Reinhold Kliegl, Umesh Patil, and Shravan Vasishth. 2008. Parsing costs as predictors of reading difficulty: An evaluation using the Potsdam Sentence Corpus. Journal of Eye Movement Research, 2(1).", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Parallel processing and sentence comprehension difficulty", "authors": [ { "first": "Marisa", "middle": [ "Ferrara" ], "last": "Boston", "suffix": "" }, { "first": "John", "middle": [ "T" ], "last": "Hale", "suffix": "" }, { "first": "Shravan", "middle": [], "last": "Vasishth", "suffix": "" }, { "first": "Reinhold", "middle": [], "last": "Kliegl", "suffix": "" } ], "year": 2011, "venue": "Language and Cognitive Processes", "volume": "26", "issue": "3", "pages": "301--349", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marisa Ferrara Boston, John T. Hale, Shravan Vasishth, and Reinhold Kliegl. 2011. Parallel processing and sentence comprehension difficulty. Language and Cognitive Processes, 26(3):301-349.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Data from eyetracking corpora as evidence for theories of syntactic processing complexity", "authors": [ { "first": "Vera", "middle": [], "last": "Demberg", "suffix": "" }, { "first": "Frank", "middle": [], "last": "Keller", "suffix": "" } ], "year": 2008, "venue": "Cognition", "volume": "109", "issue": "2", "pages": "193--210", "other_ids": { "DOI": [ "10.1016/j.cognition.2008.07.008" ] }, "num": null, "urls": [], "raw_text": "Vera Demberg and Frank Keller. 2008. Data from eye-tracking corpora as evidence for theories of syntactic processing complexity. Cognition, 109(2):193-210.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Large-scale evidence of dependency length minimization in 37 languages", "authors": [ { "first": "Richard", "middle": [], "last": "Futrell", "suffix": "" }, { "first": "Kyle", "middle": [], "last": "Mahowald", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Gibson", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the National Academy of Sciences", "volume": "112", "issue": "33", "pages": "10336--10341", "other_ids": { "DOI": [ "10.1073/pnas.1502134112" ] }, "num": null, "urls": [], "raw_text": "Richard Futrell, Kyle Mahowald, and Edward Gibson. 2015. Large-scale evidence of dependency length minimization in 37 languages. Proceedings of the National Academy of Sciences, 112(33):10336-10341.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Neural language models as psycholinguistic subjects: Representations of syntactic state", "authors": [ { "first": "Richard", "middle": [], "last": "Futrell", "suffix": "" }, { "first": "Ethan", "middle": [], "last": "Wilcox", "suffix": "" }, { "first": "Takashi", "middle": [], "last": "Morita", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Qian", "suffix": "" }, { "first": "Miguel", "middle": [], "last": "Ballesteros", "suffix": "" }, { "first": "Roger", "middle": [], "last": "Levy", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "32--42", "other_ids": { "DOI": [ "10.18653/v1/N19-1004" ] }, "num": null, "urls": [], "raw_text": "Richard Futrell, Ethan Wilcox, Takashi Morita, Peng Qian, Miguel Ballesteros, and Roger Levy. 2019. Neural language models as psycholinguistic subjects: Representations of syntactic state. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 32-42, Minneapolis, Minnesota. Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Linguistic complexity: Locality of syntactic dependencies", "authors": [ { "first": "Edward", "middle": [], "last": "Gibson", "suffix": "" } ], "year": 1998, "venue": "Cognition", "volume": "68", "issue": "1", "pages": "1--76", "other_ids": {}, "num": null, "urls": [], "raw_text": "Edward Gibson. 1998. Linguistic complexity: Locality of syntactic dependencies. Cognition, 68(1):1-76.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "The dependency locality theory: A distance-based theory of linguistic complexity", "authors": [ { "first": "Edward", "middle": [], "last": "Gibson", "suffix": "" } ], "year": 2000, "venue": "Image, Language, Brain: Papers from the First Mind Articulation Project Symposium", "volume": "", "issue": "", "pages": "95--126", "other_ids": {}, "num": null, "urls": [], "raw_text": "Edward Gibson. 2000. The dependency locality theory: A distance-based theory of linguistic complexity. In Image, Language, Brain: Papers from the First Mind Articulation Project Symposium, pages 95-126, Cambridge, MA. MIT Press.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Colorless green recurrent networks dream hierarchically", "authors": [ { "first": "Kristina", "middle": [], "last": "Gulordava", "suffix": "" }, { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Tal", "middle": [], "last": "Linzen", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" } ], "year": 2018, "venue": "Proceedings of NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, and Marco Baroni. 2018. Colorless green recurrent networks dream hierarchically. In Proceedings of NAACL.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "A probabilistic Earley parser as a psycholinguistic model", "authors": [ { "first": "John", "middle": [ "T." ], "last": "Hale", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the Second Meeting of the North American Chapter of the Association for Computational Linguistics and Language Technologies", "volume": "", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "John T. Hale. 2001. A probabilistic Earley parser as a psycholinguistic model. In Proceedings of the Second Meeting of the North American Chapter of the Association for Computational Linguistics and Language Technologies, pages 1-8.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A structural probe for finding syntax in word representations", "authors": [ { "first": "John", "middle": [], "last": "Hewitt", "suffix": "" }, { "first": "Christopher", "middle": [ "D." ], "last": "Manning", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4129--4138", "other_ids": { "DOI": [ "10.18653/v1/N19-1419" ] }, "num": null, "urls": [], "raw_text": "John Hewitt and Christopher D. Manning. 2019. A structural probe for finding syntax in word representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4129-4138, Minneapolis, Minnesota. Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Psycholinguistic models of sentence processing improve sentence readability ranking", "authors": [ { "first": "David", "middle": [ "M." ], "last": "Howcroft", "suffix": "" }, { "first": "Vera", "middle": [], "last": "Demberg", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "958--968", "other_ids": {}, "num": null, "urls": [], "raw_text": "David M. Howcroft and Vera Demberg. 2017. Psycholinguistic models of sentence processing improve sentence readability ranking. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 958-968, Valencia, Spain. Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A probabilistic model of lexical and syntactic access and disambiguation", "authors": [ { "first": "Daniel", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 1996, "venue": "Cognitive Science", "volume": "20", "issue": "2", "pages": "137--194", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Jurafsky. 1996. A probabilistic model of lexical and syntactic access and disambiguation. Cognitive Science, 20(2):137-194.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "The Dundee corpus", "authors": [ { "first": "Alan", "middle": [], "last": "Kennedy", "suffix": "" }, { "first": "Robin", "middle": [], "last": "Hill", "suffix": "" }, { "first": "Jo\u00ebl", "middle": [], "last": "Pynte", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alan Kennedy, Robin Hill, and Jo\u00ebl Pynte. 2003. The Dundee corpus. Poster presented at the 12th European Conference on Eye Movement.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "lmerTest package: Tests in linear mixed effects models", "authors": [ { "first": "Alexandra", "middle": [], "last": "Kuznetsova", "suffix": "" }, { "first": "Per", "middle": [ "B." ], "last": "Brockhoff", "suffix": "" }, { "first": "Rune", "middle": [ "H.", "B." ], "last": "Christensen", "suffix": "" } ], "year": 2017, "venue": "Journal of Statistical Software", "volume": "82", "issue": "13", "pages": "1--26", "other_ids": { "DOI": [ "10.18637/jss.v082.i13" ] }, "num": null, "urls": [], "raw_text": "Alexandra Kuznetsova, Per B. Brockhoff, and Rune H. B. Christensen. 2017. lmerTest package: Tests in linear mixed effects models. Journal of Statistical Software, 82(13):1-26.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Expectation-based syntactic comprehension", "authors": [ { "first": "Roger", "middle": [], "last": "Levy", "suffix": "" } ], "year": 2008, "venue": "Cognition", "volume": "106", "issue": "3", "pages": "1126--1177", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roger Levy. 2008. Expectation-based syntactic comprehension. Cognition, 106(3):1126-1177.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Memory and surprisal in human sentence comprehension", "authors": [ { "first": "Roger", "middle": [], "last": "Levy", "suffix": "" } ], "year": 2013, "venue": "Sentence Processing", "volume": "", "issue": "", "pages": "78--114", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roger Levy. 2013. Memory and surprisal in human sentence comprehension. In Roger P. G. van Gompel, editor, Sentence Processing, pages 78-114. Hove: Psychology Press.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "An activation-based model of sentence processing as skilled memory retrieval", "authors": [ { "first": "Richard", "middle": [ "L" ], "last": "Lewis", "suffix": "" }, { "first": "Shravan", "middle": [], "last": "Vasishth", "suffix": "" } ], "year": 2005, "venue": "Cognitive Science", "volume": "29", "issue": "3", "pages": "375--419", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard L. Lewis and Shravan Vasishth. 2005. An activation-based model of sentence processing as skilled memory retrieval. Cognitive Science, 29(3):375-419.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Dependency distance: A new perspective on syntactic patterns in natural languages", "authors": [ { "first": "Haitao", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Chunshan", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Junying", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2017, "venue": "Physics of Life Reviews", "volume": "21", "issue": "", "pages": "171--193", "other_ids": {}, "num": null, "urls": [], "raw_text": "Haitao Liu, Chunshan Xu, and Junying Liang. 2017. Dependency distance: A new perspective on syntactic patterns in natural languages. Physics of Life Reviews, 21:171-193.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Emergent linguistic structure in artificial neural networks trained by self-supervision", "authors": [ { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Clark", "suffix": "" }, { "first": "John", "middle": [], "last": "Hewitt", "suffix": "" }, { "first": "Urvashi", "middle": [], "last": "Khandelwal", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the National Academy of Sciences", "volume": "117", "issue": "", "pages": "30046--30054", "other_ids": { "DOI": [ "10.1073/pnas.1907367117" ] }, "num": null, "urls": [], "raw_text": "Christopher D. Manning, Kevin Clark, John Hewitt, Urvashi Khandelwal, and Omer Levy. 2020. Emergent linguistic structure in artificial neural networks trained by self-supervision. Proceedings of the National Academy of Sciences, 117(48):30046-30054.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Lexical surprisal as a general predictor of reading time", "authors": [ { "first": "Irene", "middle": [], "last": "Fernandez Monsalve", "suffix": "" }, { "first": "Stefan", "middle": [ "L" ], "last": "Frank", "suffix": "" }, { "first": "Gabriella", "middle": [], "last": "Vigliocco", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "398--408", "other_ids": {}, "num": null, "urls": [], "raw_text": "Irene Fernandez Monsalve, Stefan L. Frank, and Gabriella Vigliocco. 2012. Lexical surprisal as a general predictor of reading time. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 398-408, Avignon, France. Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Language models are unsupervised multitask learners", "authors": [ { "first": "A", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Wu", "suffix": "" }, { "first": "R", "middle": [], "last": "Child", "suffix": "" }, { "first": "David", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Dario", "middle": [], "last": "Amodei", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Radford, Jeffrey Wu, R. Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Eye movements in reading and information processing: 20 years of research", "authors": [ { "first": "Keith", "middle": [], "last": "Rayner", "suffix": "" } ], "year": 1998, "venue": "Psychological Bulletin", "volume": "124", "issue": "", "pages": "372--422", "other_ids": {}, "num": null, "urls": [], "raw_text": "Keith Rayner. 1998. Eye movements in reading and information processing: 20 years of research. 
Psychological Bulletin, 124:372-422.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "The effect of word predictability on reading time is logarithmic", "authors": [ { "first": "Nathaniel", "middle": [ "J" ], "last": "Smith", "suffix": "" }, { "first": "Roger", "middle": [], "last": "Levy", "suffix": "" } ], "year": 2013, "venue": "Cognition", "volume": "128", "issue": "3", "pages": "302--319", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nathaniel J. Smith and Roger Levy. 2013. The effect of word predictability on reading time is logarithmic. Cognition, 128(3):302-319.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Minimizing syntactic dependency lengths: Typological/cognitive universal?", "authors": [ { "first": "David", "middle": [], "last": "Temperley", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Gildea", "suffix": "" } ], "year": 2018, "venue": "Annual Review of Linguistics", "volume": "4", "issue": "", "pages": "1--15", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Temperley and Daniel Gildea. 2018. Minimizing syntactic dependency lengths: Typological/cognitive universal? Annual Review of Linguistics, 4:1-15.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Modeling garden path effects without explicit hierarchical syntax", "authors": [ { "first": "Marten", "middle": [], "last": "Van Schijndel", "suffix": "" }, { "first": "Tal", "middle": [], "last": "Linzen", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 40th Annual Meeting of the Cognitive Science Society", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marten van Schijndel and Tal Linzen. 2018. Modeling garden path effects without explicit hierarchical syntax. In Proceedings of the 40th Annual Meeting of the Cognitive Science Society.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "On the predictive power of neural language models for human real-time comprehension behavior", "authors": [ { "first": "Ethan", "middle": [ "Gotlieb" ], "last": "Wilcox", "suffix": "" }, { "first": "Jon", "middle": [], "last": "Gauthier", "suffix": "" }, { "first": "Jennifer", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Qian", "suffix": "" }, { "first": "Roger", "middle": [], "last": "Levy", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2006.01912" ] }, "num": null, "urls": [], "raw_text": "Ethan Gotlieb Wilcox, Jon Gauthier, Jennifer Hu, Peng Qian, and Roger Levy. 2020. On the predictive power of neural language models for human real-time comprehension behavior. arXiv preprint arXiv:2006.01912.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "uris": null, "text": "Dependency Locality Theory integration costs" }, "FIGREF1": { "num": null, "type_str": "figure", "uris": null, "text": "GAM plots from RNN (blue) and GPT-2 (red) surprisals at words n through n \u2212 3. Shaded region indicates a 95% confidence interval." }, "TABREF0": { "type_str": "table", "num": null, "text": "Regression results for the model containing both surprisal and IC, for all data and for nouns. *** p < 0.001, ** p < 0.01, * p < 0.05.", "content": "
          | All Data              | Nouns
          | RNN       | GPT-2     | RNN       | GPT-2
Intercept | 164.1 *** | 170.0 *** | 144.0 *** | 154.6 ***
s_0       | 1.847 *** | 1.606 *** | 1.752 *** | 1.561 ***
s_1       | 1.738 *** | 0.853 *** | 2.042 *** | 0.864 ***
IC        | -0.823 ***| -0.767 ** | 1.374 *   | 1.593 *
IC_1      | -0.566    | -0.1332   | 0.154     | -0.957
", "html": null }, "TABREF1": { "type_str": "table", "num": null, "text": "). Restricting data solely to nouns yields a strong positive coefficient. A model fit on both", "content": "
          | RNN       | GPT-2
Intercept | 163.9 *** | 169.8 ***
s_0       | 1.826 *** | 1.609 ***
s_1       | 1.733 *** | 0.854 ***
", "html": null }, "TABREF2": { "type_str": "table", "num": null, "text": "Surprisal regression results from RNN and GPT-2. *** p < 0.001, ** p < 0.01, * p < 0.05.", "content": "
          | All Data   | Nouns
Intercept | 166.8 ***  | 153.6 ***
IC        | -1.298 *** | 1.134 *
IC_1      | -0.201     | 0.127
", "html": null }, "TABREF3": { "type_str": "table", "num": null, "text": "IC regression results for all data and nouns. *** p < 0.001, ** p < 0.01, * p < 0.05.", "content": "", "html": null }, "TABREF5": { "type_str": "table", "num": null, "text": "Correlations (Pearson's r) between surprisal and IC for all data and nouns only, p < 0.001 for all.", "content": "
", "html": null } } } }