{ "paper_id": "D08-1025", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:31:01.633183Z" }, "title": "A noisy-channel model of rational human sentence comprehension under uncertain input", "authors": [ { "first": "Roger", "middle": [], "last": "Levy", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of California -San Diego", "location": { "addrLine": "9500 Gilman Drive #0108 La Jolla", "postCode": "92093-0108", "region": "CA" } }, "email": "rlevy@ling.ucsd.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Language comprehension, as with all other cases of the extraction of meaningful structure from perceptual input, takes places under noisy conditions. If human language comprehension is a rational process in the sense of making use of all available information sources, then we might expect uncertainty at the level of word-level input to affect sentence-level comprehension. However, nearly all contemporary models of sentence comprehension assume clean input-that is, that the input to the sentence-level comprehension mechanism is a perfectly-formed, completely certain sequence of input tokens (words). This article presents a simple model of rational human sentence comprehension under noisy input, and uses the model to investigate some outstanding problems in the psycholinguistic literature for theories of rational human sentence comprehension. We argue that by explicitly accounting for inputlevel noise in sentence processing, our model provides solutions for these outstanding problems and broadens the scope of theories of human sentence comprehension as rational probabilistic inference.", "pdf_parse": { "paper_id": "D08-1025", "_pdf_hash": "", "abstract": [ { "text": "Language comprehension, as with all other cases of the extraction of meaningful structure from perceptual input, takes places under noisy conditions. If human language comprehension is a rational process in the sense of making use of all available information sources, then we might expect uncertainty at the level of word-level input to affect sentence-level comprehension. However, nearly all contemporary models of sentence comprehension assume clean input-that is, that the input to the sentence-level comprehension mechanism is a perfectly-formed, completely certain sequence of input tokens (words). This article presents a simple model of rational human sentence comprehension under noisy input, and uses the model to investigate some outstanding problems in the psycholinguistic literature for theories of rational human sentence comprehension. We argue that by explicitly accounting for inputlevel noise in sentence processing, our model provides solutions for these outstanding problems and broadens the scope of theories of human sentence comprehension as rational probabilistic inference.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Considering the adversity of the conditions under which linguistic communication takes place in everyday life-ambiguity of the signal, environmental competition for our attention, speaker error, and so forth-it is perhaps remarkable that we are as successful at it as we are. 
Perhaps the leading explanation of this success is that (a) the linguistic signal is redundant, and (b) diverse information sources are generally available that can help us obtain infer the intended message (or something close enough) when comprehending an utterance (Tanenhaus et al., 1995; Altmann and Kamide, 1999; Charniak, 2002, 2003; Aylett and Turk, 2004; Keller, 2004; Levy and Jaeger, 2007) . Given the difficulty of this task coupled with the availability of redundancy and useful information sources, it would seem rational for all available information to be used to its fullest in sentence comprehension. This idea is either implicit or explicit in several interactivist theories of probabilistic language comprehension (Jurafsky, 1996; Hale, 2001; Narayanan and Jurafsky, 2002; Levy, 2008) . However, these theories have implicitly assumed a partitioning of interactivity that distinguishes the word as a fundamental level of linguistic information processing: word recognition is an evidential process whose output is nonetheless a specific \"winner-takes-all\" sequence of words, which is in turn the input to an evidential sentencecomprehension process. It is theoretically possible that this partition is real and is an optimal solution to the problem of language comprehension under gross architectural constraints that favor modularity (Fodor, 1983) . On the other hand, it is also possible that this partition has been a theoretical convenience but that, in fact, evidence at the sub-word level plays an important role in sentence processing, and that sentence-level information can in turn affect word recognition. If the latter is the case, then the question arises of how we might model this type of information flow, and what consequences it might have for our understanding of human language comprehension. This article employs the well-understood formalisms of probabilistic context-free grammars (PCFGs) and weighted finite-state automata (wF-SAs) to propose a novel yet simple noisy-channel probabilistic model of sentence comprehension under circumstances where there is uncertainty about word-level representations. Section 2 introduces this model. We use this new model to investigate two outstanding problems for the theory of rational sentence comprehension: one involving global inference-the beliefs that a human comprehender arrives at regarding the meaning of a sentence after reading it in its entirety (Section 3)-and one involving incremental inference-the beliefs that a comprehender forms and updates moment by moment while reading each part of it (Section 4). The common challenge posed by each of these problems is an apparent failure on the part of the comprehender to use information made available in one part of a sentence to rule out an interpretation of another part of the sentence that is inconsistent with this information. 
In each case, we will see that the introduction of uncertainty into the input representation, coupled with noisy-channel inference, provides a unified solution within a theory of rational comprehension.", "cite_spans": [ { "start": 543, "end": 567, "text": "(Tanenhaus et al., 1995;", "ref_id": "BIBREF36" }, { "start": 568, "end": 593, "text": "Altmann and Kamide, 1999;", "ref_id": "BIBREF1" }, { "start": 594, "end": 615, "text": "Charniak, 2002, 2003;", "ref_id": null }, { "start": 616, "end": 638, "text": "Aylett and Turk, 2004;", "ref_id": "BIBREF2" }, { "start": 639, "end": 652, "text": "Keller, 2004;", "ref_id": "BIBREF23" }, { "start": 653, "end": 675, "text": "Levy and Jaeger, 2007)", "ref_id": "BIBREF27" }, { "start": 1009, "end": 1025, "text": "(Jurafsky, 1996;", "ref_id": "BIBREF22" }, { "start": 1026, "end": 1037, "text": "Hale, 2001;", "ref_id": "BIBREF16" }, { "start": 1038, "end": 1067, "text": "Narayanan and Jurafsky, 2002;", "ref_id": "BIBREF29" }, { "start": 1068, "end": 1079, "text": "Levy, 2008)", "ref_id": "BIBREF25" }, { "start": 1630, "end": 1643, "text": "(Fodor, 1983)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The use of generative probabilistic grammars for parsing is well understood (e.g., Charniak, 1997; Collins, 1999) . The problem of using a probabilistic grammar G to find the \"best parse\" T for a known input string w is formulated as 1", "cite_spans": [ { "start": 83, "end": 98, "text": "Charniak, 1997;", "ref_id": "BIBREF4" }, { "start": 99, "end": 113, "text": "Collins, 1999)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Sentence comprehension under uncertain input", "sec_num": "2" }, { "text": "1 By assumption, G is defined such that its complete productions T completely specify the string, such that P (w|T ) is non-zero for only one value of w.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence comprehension under uncertain input", "sec_num": "2" }, { "text": "arg max T P G (T |w) (I)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence comprehension under uncertain input", "sec_num": "2" }, { "text": "but a generative grammar directly defines the joint distribution P G (T, w) rather than the conditional distribution. In this case, Bayes' rule is used to find the posterior:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence comprehension under uncertain input", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P G (T |w) = P (T, w) P (w) (II) \u221d P (T, w)", "eq_num": "(III)" } ], "section": "Sentence comprehension under uncertain input", "sec_num": "2" }, { "text": "If the input string is unknown, the problem changes. Suppose we have some noisy evidence I that determines a probability distribution over input strings P (w|I). 
We can still use Bayes' rule to obtain the posterior:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence comprehension under uncertain input", "sec_num": "2" }, { "text": "P G (T |I) = P (T, I) P (I) (IV) \u221d w P (I|T, w)P (w|T )P (T ) (V)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence comprehension under uncertain input", "sec_num": "2" }, { "text": "Likewise, if we are focused on inferring which words were seen given an uncertain input, we have", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence comprehension under uncertain input", "sec_num": "2" }, { "text": "P G (w|I) \u221d T P (I|T, w)P (w|T )P (T ) (VI)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence comprehension under uncertain input", "sec_num": "2" }, { "text": "This paper considers situations such as controlled psycholinguistic experiments where we (the researchers) know the sentence w * presented to a comprehender, but do not know the specific input I that the comprehender obtains. In this case, if we are, for example, interested in the expected inferences of a rational comprehender about what word string she was exposed to, the probability distribution of interest is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Uncertainty for a Known Input", "sec_num": "2.1" }, { "text": "P (w|w * ) = I P C (w|I, w * )P T (I|w * ) dI (VII)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Uncertainty for a Known Input", "sec_num": "2.1" }, { "text": "where P C is the probability distribution used by the comprehender to process perceived input, and P T is the \"true\" probability distribution over the inputs that might actually be perceived given the true sentence. Since the comprehender does not observe w * we must have conditional independence between w and w * given I. We can then apply Bayes' rule to (VII) to obtain", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Uncertainty for a Known Input", "sec_num": "2.1" }, { "text": "P (w|w * ) = I P C (I|w)P C (w) P C (I) P T (I|w * ) dI (VIII) = P C (w) I P C (I|w)P T (I|w * ) P C (I) dI (IX) \u221d P C (w)Q(w, w * ) (X)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Uncertainty for a Known Input", "sec_num": "2.1" }, { "text": "where Q(w, w * ) is proportional to the integral term in Equation (IX). The term P C (w) corresponds to the comprehender's prior beliefs; the integral term is the effect of input uncertainty. If comprehenders model noise rationally, then we should have P C (I|w) = P T (I|w), and thus Q(w, w * ) becomes a symmetric, non-negative function of w and w * ; hence the effect of input uncertainty can be modeled by a kernel function on input string pairs. (Similar conclusions result when the posterior distribution of interest is over structures T .) It is an open question which kernel functions might best model the inferences made in human sentence comprehension. Most obviously the kernel function should account for noise (environmental, perceptual, and attentional) introduced into the signal en route to the neural stage of abstract sentence processing. In addition, this kernel function might also be a natural means of accounting for modeling error such as disfluencies (Johnson and Charniak, 2004) , word/phrase swaps, and even well-formed utterances that the speaker did not intend. 
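As a concrete illustration of (X), the following sketch (all names and probabilities are ours, invented for illustration) computes the posterior over a handful of candidate word strings given a toy prior P_C(w) and a kernel that is simply exponential in word-level edit distance, a coarse stand-in for whatever kernel is ultimately chosen to model input noise.

```python
# A minimal sketch of Equation (X): the comprehender's posterior over word
# strings, P(w | w*) proportional to P_C(w) Q(w, w*).  The candidate strings
# and prior probabilities are toy values invented for illustration, and the
# kernel is a simple exponential in word-level edit distance -- a coarse
# stand-in for whatever kernel is chosen to model input noise.
import math

def word_edit_distance(w, w_star):
    # Levenshtein distance computed over whole words (one word = one symbol).
    m, n = len(w), len(w_star)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if w[i - 1] == w_star[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # delete a word
                          d[i][j - 1] + 1,          # insert a word
                          d[i - 1][j - 1] + cost)   # substitute a word
    return d[m][n]

def posterior_over_strings(prior, w_star, lam=1.0):
    # Equation (X): P(w | w*) over the candidate strings in `prior`.
    scores = {w: p * math.exp(-lam * word_edit_distance(w.split(), w_star.split()))
              for w, p in prior.items()}
    z = sum(scores.values())
    return {w: s / z for w, s in scores.items()}

# Toy example: smaller lam means noisier input, so the prior matters more.
prior_c = {'a cat sat': 0.5, 'the cat sat': 0.3, 'a cat sat down': 0.2}
print(posterior_over_strings(prior_c, 'a cat sat', lam=1.0))
```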
For purposes of this paper, we limit ourselves to a simple kernel based on the Levenshtein distance LD(w, w \u2032 ) between words and constructed in the form of a weighted finite-state automaton (Mohri, 1997) .", "cite_spans": [ { "start": 975, "end": 1003, "text": "(Johnson and Charniak, 2004)", "ref_id": "BIBREF21" }, { "start": 1281, "end": 1294, "text": "(Mohri, 1997)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Uncertainty for a Known Input", "sec_num": "2.1" }, { "text": "Suppose that the input word string w * consists of words w 1...n . We define the Levenshtein-distance kernel as follows. Figure 1 : The Levenshtein-distance kernel for multiword string edits. K LD (w * ) is shown for \u03a3 = {cat,sat,a}, w * = (a cat sat), and \u03bb = 1. State 0 is the start state, and State 3 is the lone (zero-cost) final state.", "cite_spans": [], "ref_spans": [ { "start": 121, "end": 129, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "The Levenshtein-distance kernel", "sec_num": "2.2" }, { "text": "and n the (zero-cost) final state. We add two types of arcs to this automaton: (a) substitution/deletion arcs", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Levenshtein-distance kernel", "sec_num": "2.2" }, { "text": "(i \u2212 1, w \u2032 ) \u2192 i, i \u2208 1, . . . , n, each with cost \u03bb LD(w i , w \u2032 ), for all w \u2032 \u2208 \u03a3 \u222a {\u01eb}; and (b) in- sertion loop arcs (j, w \u2032 ) \u2192 j, j \u2208 0, . . . , n, each with cost \u03bb LD(\u01eb, w \u2032 ), for all w \u2032 \u2208 \u03a3. 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Levenshtein-distance kernel", "sec_num": "2.2" }, { "text": "The resulting wFSA K LD (w * ) defines a function over w such that the summed weight of paths through the wFSA accepting w is log Q(w, w * ). This kernel allows for the possibility of word substitutions (represented by the transition arcs with labels that are neither w i nor \u01eb), word deletions (represented by the transition arcs with \u01eb labels), and even word insertions (represented by the loop arcs). The unnormalized probability of each type of operation is exponential in the Levenshtein distance of the change induced by the operation. The term \u03bb is a free parameter, with smaller values corresponding to noisier input. Figure 1 gives an example of the Levenshtein-distance kernel for a simple vocabulary and sentence. 3", "cite_spans": [], "ref_spans": [ { "start": 626, "end": 634, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "The Levenshtein-distance kernel", "sec_num": "2.2" }, { "text": "The problem of finding structures or strings with high posterior probability given a particular input string w * is quite similar to the problem faced in the parsing of speech, where the acoustic input I to a parser can be represented as a lattice of possible word sequences, and the edges of the lattice have weights determined by a model of acoustic realization of words, P (I|w) (Collins et al., 2004; Johnson, 2003, 2004) . The two major differences between lattice parsing and our problem are (a) we have integrated out the expected effect of noise, which is thus implicit in our choice of kernel; and (b) the loops in the Levenshtein-distance kernel mean that the input to parsing is no longer a lattice. 
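To make this construction concrete, the sketch below enumerates the arcs of K_LD(w*) as plain (source, label, target, cost) tuples rather than through a wFSA toolkit; the vocabulary and input string are those of the Figure 1 example, and the helper names are ours rather than part of any existing implementation. The arcs added in step (b) are precisely the loops at issue.

```python
# A minimal sketch of the Levenshtein-distance kernel K_LD(w*) of Section 2.2,
# written out as a plain list of weighted arcs rather than through a wFSA
# toolkit such as OpenFst.  Arc costs are lambda * LD(., .), with epsilon
# treated as the zero-length word (footnote 2).  The vocabulary and input
# string are those of the Figure 1 example; the helper names are ours.

def char_levenshtein(a, b):
    # Character-level Levenshtein distance between two words ('' = epsilon).
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[m][n]

def levenshtein_kernel_arcs(w_star, sigma, lam=1.0):
    # Arcs (source, label, target, cost) of K_LD(w*); state 0 is the start
    # state and state len(w_star) is the lone (zero-cost) final state.
    arcs, n = [], len(w_star)
    for i in range(1, n + 1):
        # (a) substitution/deletion arcs (i-1, w') -> i, for w' in Sigma u {eps}
        for w_prime in list(sigma) + ['']:
            arcs.append((i - 1, w_prime, i, lam * char_levenshtein(w_star[i - 1], w_prime)))
    for j in range(n + 1):
        # (b) insertion loop arcs (j, w') -> j, for w' in Sigma
        for w_prime in sigma:
            arcs.append((j, w_prime, j, lam * char_levenshtein('', w_prime)))
    return arcs

for arc in levenshtein_kernel_arcs(['a', 'cat', 'sat'], sigma=['cat', 'sat', 'a']):
    print(arc)
```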
This latter difference means that some of the techniques applicable to string parsing and lattice parsing -notably the computation of inside probabilities -are no longer possible using exact methods. We return to this difference in Sections 3 and 4.", "cite_spans": [ { "start": 382, "end": 404, "text": "(Collins et al., 2004;", "ref_id": "BIBREF7" }, { "start": 405, "end": 425, "text": "Johnson, 2003, 2004)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Efficient computation of posterior beliefs", "sec_num": "2.3" }, { "text": "One clear prediction of the uncertain-input model of (VII)-(X) is that under appropriate circumstances, the prior expectations P C (w) of the comprehender should in principle be able to override the linguistic input actually presented, so that a sentence is interpreted as meaning-and perhaps even being-something other than it actually meant or was. At one level, it is totally clear that comprehenders do this on a regular basis: the ability to do this is required for someone to act as a copy editorthat is, to notice and (crucially) correct mistakes on the printed page. In many cases, these types of correction happen at a level that may be below consciousness-thus we sometimes miss a typo but interpret the sentences as it was intended, or ignore the disfluency of a speaker. What has not been previously proposed in a formal model, however, is that this can happen even when an input is a completely grammatical sentence. Here, we argue that an effect demonstrated by Christianson et al. (2001) (see also Ferreira et al., 2002) is an example of expectations overriding input. When presented sentences of the forms in (1) using methods that did not permit rereading, and asked questions of the type Did the man hunt the deer?, experimental participants gave affirmative responses significantly more often for sentences of type (1a), in which the substring the man hunted the deer appears, than for either (1b) or (1c).", "cite_spans": [ { "start": 976, "end": 1002, "text": "Christianson et al. (2001)", "ref_id": "BIBREF6" }, { "start": 1013, "end": 1035, "text": "Ferreira et al., 2002)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Global inference", "sec_num": "3" }, { "text": "(1) a. While the man hunted the deer ran into the woods. (GARDENPATH) b. While the man hunted the pheasant the deer ran into the woods. (TRANSITIVE) c. While the man hunted, the deer ran into the woods. (COMMA)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Global inference", "sec_num": "3" }, { "text": "This result was interpreted by Christianson et al. (2001) and Ferreira et al. (2002) as reflecting (i) the fact that there is a syntactic garden path in (1a)-after reading the first six words of the sentence, the preferred interpretation of the substring the man hunted the deer is as a simple clause indicating that the deer was hunted by the man-and (ii) that readers were not always successful at revising away this interpretation when they saw the disambiguating verb ran, which signals that the deer is actually the subject of the main clause, and that hunted must therefore be intransitive. Furthermore (and crucially), for (1a) participants also responded affirmatively most of the time to questions of the type Did the deer run into the woods? This result is a puzzle for existing models of sentence comprehension because no grammatical analysis exists of any substring of (1a) for which the deer is both the object of hunted and the subject of ran. 
In fact, no formal model has yet been proposed to account for this effect.", "cite_spans": [ { "start": 31, "end": 57, "text": "Christianson et al. (2001)", "ref_id": "BIBREF6" }, { "start": 62, "end": 84, "text": "Ferreira et al. (2002)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Global inference", "sec_num": "3" }, { "text": "The uncertain-input model gives us a means of accounting for these results, because there are near neighbors of (1a) for which there is a global grammatical analysis in which either the deer or a coreferent NP is in fact the object of the subordinateclause verb hunted. In particular, inserting the word it either before or after the deer creates such a near neighbor:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Global inference", "sec_num": "3" }, { "text": "(2) a. While the man hunted the deer it ran into the woods. b. While the man hunted it the deer ran into the woods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Global inference", "sec_num": "3" }, { "text": "We formalize this intuition within our model by using the wFSA representation of the Levenshtein- -Hillel et al., 1964; Nederhof and Satta, 2003) . This intersection thus represents the unnormalized posterior P C (T, w|w * ). Because there are loops in the wFSA generated by the Levenshtein-distance kernel, exact normalization of the posterior is not tractable (though see Nederhof and Satta, 2003; Chi, 1999; Smith and Johnson, 2007 for possible approaches to approximating the normalization constant). We can, however, use the lazy k-best algorithm of Huang and Chiang (2005; Algorithm 3) to obtain the word-string/parsetree pairs with highest posterior probability.", "cite_spans": [ { "start": 98, "end": 119, "text": "-Hillel et al., 1964;", "ref_id": "BIBREF3" }, { "start": 120, "end": 145, "text": "Nederhof and Satta, 2003)", "ref_id": "BIBREF30" }, { "start": 374, "end": 399, "text": "Nederhof and Satta, 2003;", "ref_id": "BIBREF30" }, { "start": 400, "end": 410, "text": "Chi, 1999;", "ref_id": "BIBREF5" }, { "start": 411, "end": 434, "text": "Smith and Johnson, 2007", "ref_id": "BIBREF34" }, { "start": 555, "end": 578, "text": "Huang and Chiang (2005;", "ref_id": "BIBREF19" }, { "start": 579, "end": 579, "text": "", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Global inference", "sec_num": "3" }, { "text": "ROOT \u2192 S PUNCT. 0.0 S \u2192 SBAR S 6.3 S \u2192 SBAR PUNCT S 4.6 PUNCT S \u2192 , S 0.0 S \u2192 NP VP 0.1 SBAR \u2192 IN S 0.0 NP \u2192 DT NN 1.9 NP \u2192 NNS 4.4 NP \u2192 NNP 3.3 NP \u2192 DT", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Global inference", "sec_num": "3" }, { "text": "To test our account of the rational noisy-channel interpretation of sentences such as (1), we defined a small PCFG using the phrasal rules listed in Figure 2 , with rule probabilities estimated from the parsed Brown corpus. 4 Lexical rewrite probabilities were determined using relative-frequency estimation over the entire parsed Brown corpus. For each of the sentence sets like (1) used in Experiments 1a, 1b, and 2 of Christianson et al. (2001) that have complete lexical coverage in the parsed Brown corpus (22 sets in total), a noisy-input wFSA was constructed using K LD , permitting all words occurring more than 2500 times in the parsed Brown corpus as possible edit/insertion targets. 
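As an illustration of the grammar-estimation step just described, the sketch below derives rule weights of the kind shown in Figure 2 by relative-frequency estimation, expressed as negative log-probabilities in bits; the counts are invented for illustration and the function is ours, not the script actually used in the experiments.

```python
# A minimal sketch of the grammar-estimation step behind Figure 2:
# relative-frequency estimation over treebank rule counts, with weights
# reported as negative log-probabilities in bits.  The counts below are
# invented for illustration; in the paper the counts come from tgrep2/Tregex
# queries over the parsed Brown corpus.
import math
from collections import defaultdict

def rule_weights_in_bits(rule_counts):
    # rule_counts maps (lhs, rhs_tuple) -> count; returns -log2 P(rhs | lhs).
    lhs_totals = defaultdict(float)
    for (lhs, _), count in rule_counts.items():
        lhs_totals[lhs] += count
    return {rule: -math.log2(count / lhs_totals[rule[0]])
            for rule, count in rule_counts.items()}

toy_counts = {
    ('S', ('SBAR', 'S')): 10,
    ('S', ('SBAR', 'PUNCT_S')): 33,
    ('S', ('NP', 'VP')): 740,
}
for rule, bits in sorted(rule_weights_in_bits(toy_counts).items()):
    print(rule, round(bits, 1))
```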
5 Figure 3 shows the average proportion of parse trees among the 100 best parses in the intersection between this PCFG and the wFSA for each sentence for which an interpretation is available such that the deer or a coreferent NP is the direct object of hunted. 6 The Levenshtein distance penalty \u03bb is a free parameter in the model, but the results are consistent for a wide range of \u03bb: interpretations of type (2) are more prevalent both in terms of number mass for (1a) than for either (1b) or (1c). Furthermore, across 9 noise values for 22 sentence sets, there were never more interpretations of type (2) for COMMA sentences than for the corresponding GARDENPATH sentences, and in only one case were there more such interpretations for a TRANSITIVE sentence than for the corresponding GARDENPATH sentence.", "cite_spans": [ { "start": 422, "end": 448, "text": "Christianson et al. (2001)", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 149, "end": 158, "text": "Figure 2", "ref_id": null }, { "start": 697, "end": 705, "text": "Figure 3", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Experimental Verification", "sec_num": "3.1" }, { "text": "We begin taking up the role of input uncertainty for incremental comprehension by posing a question:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incremental comprehension and error identification", "sec_num": "4" }, { "text": "4 Counts of these rules were obtained using tgrep2/Tregex tree-matching patterns (Rohde, 2005; Levy and Andrew, 2006) , available online at http://idiom.ucsd.edu/\u02dcrlevy/papers/ emnlp2008/tregex_patterns.", "cite_spans": [ { "start": 81, "end": 94, "text": "(Rohde, 2005;", "ref_id": "BIBREF32" }, { "start": 95, "end": 117, "text": "Levy and Andrew, 2006)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Incremental comprehension and error identification", "sec_num": "4" }, { "text": "We have also investigated the use of broad-coverage PCFGs estimated using standard treebank-based techniques, but found that the computational cost of inference with treebank-sized grammars was prohibitive.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incremental comprehension and error identification", "sec_num": "4" }, { "text": "5 The word-frequency cutoff was introduced for computational speed; we have obtained qualitatively similar results with lower word-frequency cutoffs. 6 We took a parse tree to satisfy this criterion if the NP the deer appeared either as the matrix-clause subject or the embedded-clause object, and a pronoun appeared in the other position. In a finer-grained grammatical model, number/gender agreement would be enforced between such a pronoun and the NP in the posterior, so that the probability mass for these parses would be concentrated on cases where the pronoun is it. what is the optimal way to read a sentence on a page (Legge et al., 1997) ? Presumably, the goal of reading is to find a good compromise between scanning the contents of the sentence as quickly as possible while achieving an accurate understanding of the sentence's meaning. To a first approximation, humans solve this problem by reading each sentence in a document from beginning to end, regardless of the actual layout; whether this general solution is best understood in terms of optimality or rather as parasitic on spoken language comprehension is an open question beyond the immediate scope of the present paper. 
However, about 10-15% of eye movements in reading are regressive (Rayner, 1998) , and we may usefully refine our question to when a regressive eye movement might be a good decision. In traditional models of sentence comprehension, the optimal answer would certainly be \"never\", since past observations are known with certainty. But once uncertainty about the past is accounted for, it is clear that there may in principle be situations in which regressive saccades may be the best choice.", "cite_spans": [ { "start": 150, "end": 151, "text": "6", "ref_id": null }, { "start": 627, "end": 647, "text": "(Legge et al., 1997)", "ref_id": "BIBREF24" }, { "start": 1258, "end": 1272, "text": "(Rayner, 1998)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Incremental comprehension and error identification", "sec_num": "4" }, { "text": "What are these situations? One possible answer would be: when the uncertainty (e.g., measured by entropy) about an earlier part of the sentence is high. There are some cases in which this is probably the correct answer: many regressive eye movements are very small and the consensus in the eye-movement literature is that they represent corrections for motor error at the saccadic level. That is, the eyes overshoot the intended target and regress to obtain in-formation about what was missed. However, motor error can account only for short, isolated regressions, and about one-sixth of regressions are part of a longer series back into the sentence, into a much earlier part of the text which has already been read. We propose that these regressive saccades might be the best choice when the most recent observed input significantly changes the comprehender's beliefs about the earlier parts of the sentence. To make the discussion more concrete, we turn to another recent result in the psycholinguistic literature that has been argued to be problematic for rational theories of sentence comprehension.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incremental comprehension and error identification", "sec_num": "4" }, { "text": "It has been shown (Tabor et al., 2004) that sentences such as (3) below induce considerable processing difficulty at the word tossed, as measured in word-by-word reading times:", "cite_spans": [ { "start": 18, "end": 38, "text": "(Tabor et al., 2004)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Incremental comprehension and error identification", "sec_num": "4" }, { "text": "(3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incremental comprehension and error identification", "sec_num": "4" }, { "text": "The coach smiled at the player tossed a frisbee. (LOCALLY COHERENT)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incremental comprehension and error identification", "sec_num": "4" }, { "text": "Both intuition and controlled experiments reveal that this difficulty seems due at least in part to the category ambiguity of the word tossed, which is occasionally used as a participial verb but is much more frequently used as a simple-past verb. Although tossed in (3) is actually a participial verb introducing a reduced relative clause (and the player is hence its recipient), most native English speakers find it extremely difficult not to interpret tossed as a main verb and the player as its agent-far more difficult than for corresponding sentences in which the critical participial verb is morphologically distinct from the simple past form ((4a), (4c); c.f. 
threw) or in which the relative clause is unreduced and thus clearly marked ((4b), (4c)).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incremental comprehension and error identification", "sec_num": "4" }, { "text": "(4) a. The coach smiled at the player thrown a frisbee. (LOCALLY INCOHERENT) b. The coach smiled at the player who was tossed a frisbee. c. The coach smiled at the player who was thrown a frisbee.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incremental comprehension and error identification", "sec_num": "4" }, { "text": "The puzzle here for rational approaches to sentence comprehension is that the preceding top-down context provided by The coach smiled at. . . should completely rule out the possibility of seeing a main verb immediately after player, hence a rational com-prehender should not be distracted by the part-ofspeech ambiguity. 7", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incremental comprehension and error identification", "sec_num": "4" }, { "text": "The solution we pursue to this puzzle lies in the fact that (3) has many near-neighbor sentences in which the word tossed is in fact a simple-past tense verb. Several possibilities are listed below in (5):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An uncertain-input solution", "sec_num": "4.1" }, { "text": "(5) a. The coach who smiled at the player tossed a frisbee. b. The coach smiled as the player tossed a frisbee. c. The coach smiled and the player tossed a frisbee. d. The coach smiled at the player who tossed a frisbee. e. The coach smiled at the player that tossed a frisbee. f. The coach smiled at the player and tossed a frisbee.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An uncertain-input solution", "sec_num": "4.1" }, { "text": "The basic intuition we follow is that simple-past verb tossed is much more probable where it appears in any of (5a)-(5f) than participial tossed is in (3). Therefore, seeing this word causes the comprehender to shift her probability distribution about the earlier part of the sentence away from (3), where it had been peaked, toward its near neighbors such as the examples in (5). This change in beliefs about the past is treated as an error identification signal (EIS). In reading, a sensible response to an EIS would be a slowdown or a regressive saccade; in spoken language comprehension, a sensible response would be to allocate more working memory resources to the comprehension task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An uncertain-input solution", "sec_num": "4.1" }, { "text": "We quantify our proposed error identification signal as follows. Consider the probability distribution over the input up to, but not including, a position j in a sentence w:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Quantifying the Error Identification Signal", "sec_num": "4.2" }, { "text": "P (w [0,j) ) (XI)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Quantifying the Error Identification Signal", "sec_num": "4.2" }, { "text": "We use the subscripting [0, j) to illustrate that this interval is \"closed\" through to include the beginning of the string, but \"open\" at position j-that is, it includes all material before position j but does not include anything at that position or beyond. 
Let us then define the posterior distribution after seeing all input up through and including word i as P i (w [0,j) ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Quantifying the Error Identification Signal", "sec_num": "4.2" }, { "text": "We define the EIS induced by reading a word w i as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Quantifying the Error Identification Signal", "sec_num": "4.2" }, { "text": "D P i (w [0,i) )||P i\u22121 (w [0,i) ) (XII) \u2261 w\u2208{w [0,i)} P i (w) log P i (w) P i\u22121 (w) (XIII)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Quantifying the Error Identification Signal", "sec_num": "4.2" }, { "text": "where D(q||p) is the Kullback-Leibler divergence, or relative entropy, from p to q, a natural way of quantifying the distance between probability distributions (Cover and Thomas, 1991) which has also been argued for previously in modeling attention and surprise in both visual and linguistic cognition (Itti and Baldi, 2005; Levy, 2008) .", "cite_spans": [ { "start": 160, "end": 184, "text": "(Cover and Thomas, 1991)", "ref_id": "BIBREF9" }, { "start": 302, "end": 324, "text": "(Itti and Baldi, 2005;", "ref_id": "BIBREF20" }, { "start": 325, "end": 336, "text": "Levy, 2008)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Quantifying the Error Identification Signal", "sec_num": "4.2" }, { "text": "As in Section 3, we use a small probabilistic grammar covering the relevant structures in the problem domain to represent the comprehender's knowledge, and a wFSA based on the Levenshtein-distance kernel to represent noisy input. We are interested in comparing the EIS at the word tossed in (3) versus the EIS at the word thrown in (4a). In this case, the interval w [0,j) contains all the material that could possibly have come before the word tossed/thrown, but does not contain material at or after the position introduced by the word itself. Loops in the probabilistic grammar and the Levenshtein-distance kernel pose a challenge, however, to evaluating the EIS, because the normalization constant of the resulting grammar/input intersection is essential to evaluating Equation (XIII). To circumvent this problem, we eliminate loops from the kernel by allowing only one insertion per inter-word space. 8 (See Section 5 for a possible alternative). Figure 4 shows the (finite-state) probabilistic grammar used for the study, with rule probabilities once again determined from the parsed Brown corpus using relative frequency estimation. To calculate the distribution over strings after exposure to the i-th word in the sentence, we \"cut\" the input wFSA such that all transitions and arcs after state 2i+2 were removed and replaced with a sequence of states j = 2i + 3, . . . , m, with zero-cost transitions (j \u2212 1, w \u2032 ) \u2192 j for all w \u2032 \u2208 \u03a3 \u222a {\u01eb}, and each new j Next, map every state i onto a state pair in a new wFSA (2i, 2i + 1), with all incoming arcs in i being incoming into 2i, all outgoing arcs from i being outgoing from 2i + 1, and new transition arcs (2i, w \u2032 ) \u2192 2i + 1 for each w \u2032 \u2208 \u03a3 \u222a {\u01eb} with cost LD(\u01eb, w \u2032 ). Finally, add initial state 0 to the new wFSA with transition arcs to state 1 for all w \u2032 \u2208 \u03a3 \u222a {\u01eb} with cost LD(\u01eb, w \u2032 ). A final state i in the old wFSA corresponds to a final state 2i + 1 in the new wFSA. being a zero-cost final state. 
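Given the posteriors over sentence prefixes before and after word i, the EIS of Equation (XIII) reduces to a single Kullback-Leibler divergence. The sketch below computes it for two invented toy distributions over prefixes of (3); the actual distributions in the model come from renormalizing the grammar's intersection with the cut wFSA.

```python
# A minimal sketch of the error identification signal of Equation (XIII):
# the KL divergence (in bits) from the previous posterior over sentence
# prefixes, P_{i-1}, to the updated posterior P_i after reading word i.
# The two toy distributions are invented for illustration; in the model they
# are obtained by renormalizing the grammar's intersection with the cut wFSA.
import math

def eis(p_new, p_old):
    # D(p_new || p_old) in bits, over a shared support of prefix strings w_[0,i).
    return sum(q * math.log2(q / p_old[prefix])
               for prefix, q in p_new.items() if q > 0.0)

# Beliefs about the pre-critical-word material before and after reading the
# locally coherent word 'tossed' in sentence (3).
p_before = {'the coach smiled at the player': 0.90,
            'the coach smiled as the player': 0.05,
            'the coach who smiled at the player': 0.05}
p_after  = {'the coach smiled at the player': 0.30,
            'the coach smiled as the player': 0.40,
            'the coach who smiled at the player': 0.30}
print(round(eis(p_after, p_before), 3))   # a large belief shift yields a large EIS
```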
9 Because the intersection between this \"cut\" wFSA and the probabilistic grammar is loop-free, it can be renormalized, and the EIS can be calculated without difficulty. All the computations in this section were carried out using the OpenFST library (Allauzen et al., 2007) . Figure 5 shows the average magnitude of the EIS for sentences (3) versus (4a) at the critical word position tossed/thrown. Once again, the Levenshteindistance penalty \u03bb is a free parameter in the model, so we show model behavior as a function of \u03bb, for the eight sentence pairs in Experiment 1 of Tabor et al. with complete lexical and syntactic coverage for the grammar of Figure 4 . For values of \u03bb where the EIS is non-negligible, it is consistently larger at the critical word (tossed in (3), thrown in (4a)) in the COHERENT condition than in the INCOHERENT condition. Across a range of eight noise levels, 67% of sentence pairs had a higher EIS in the COHERENT condition than in the INCOHERENT condition. Furthermore, the cases where the INCOHERENT condition had a larger EIS occurred only for noise levels below 1.1 and above 3.6, and the maximum such EIS was quite small, at 0.067. Overall, the model's behavior is consistent with the experimental results of Tabor et al. (2004) , and can be explained through the intuition described at the end of Section 4.1.", "cite_spans": [ { "start": 2218, "end": 2241, "text": "(Allauzen et al., 2007)", "ref_id": "BIBREF0" }, { "start": 3210, "end": 3229, "text": "Tabor et al. (2004)", "ref_id": "BIBREF35" } ], "ref_spans": [ { "start": 952, "end": 960, "text": "Figure 4", "ref_id": "FIGREF1" }, { "start": 2244, "end": 2252, "text": "Figure 5", "ref_id": "FIGREF2" }, { "start": 2618, "end": 2626, "text": "Figure 4", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Experimental Verification", "sec_num": "4.3" }, { "text": "In this paper we have outlined a simple model of rational sentence comprehension under uncertain input and explored some of the consequences for outstanding problems in the psycholinguistic literature. The model proposed here will require further empirical investigation in order to distinguish it from other proposals that have been made in the literature, but if our proposal turns out to be correct it has important consequences for both the theory of language processing and cognition more generally. Most notably, it furthers the case for rationality in sentence processing; and it eliminates one of the longest-standing modularity hypotheses implicit in work on the cognitive science of language: a partition between systems of word recognition and sentence comprehension (Fodor, 1983) . Unlike the pessimistic picture originally painted by Fodor, however, the interactivist picture resulting from our model's joint inference over possible word strings and structures points to many rich details that still need to be filled in. These include questions such as what kernel functions best account for human comprehenders' modeling of noise in linguistic input, and what kinds of algorithms might allow representations with uncertain input to be computed incrementally.", "cite_spans": [ { "start": 778, "end": 791, "text": "(Fodor, 1983)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "The present work could also be extended in several more technical directions. 
Perhaps most notable is the problem of the normalization constant for the posterior distribution over word strings and structures; this problem was circumvented via a k-best approach in Section 3 and by removing loops from the Levenshtein-distance kernel in Section 4. We believe, however, that a more satisfactory solution may exist via sampling from the posterior distribution over trees and strings. This may be possible either by estimating normalizing constants for the posterior grammar using iterative weight propagation and using them to obtain proper production rule probabilities (Chi, 1999; Smith and Johnson, 2007) , or by using reversible-jump Markov-chain Monte Carlo (MCMC) techniques to sample from the posterior (Green, 1995) , and estimating the normalizing constant with annealing-based techniques (Gelman and Meng, 1998) or nested sampling (Skilling, 2004) . Scaling the model up for use with treebank-size grammars is another area for technical improvement.", "cite_spans": [ { "start": 668, "end": 679, "text": "(Chi, 1999;", "ref_id": "BIBREF5" }, { "start": 680, "end": 704, "text": "Smith and Johnson, 2007)", "ref_id": "BIBREF34" }, { "start": 807, "end": 820, "text": "(Green, 1995)", "ref_id": "BIBREF15" }, { "start": 938, "end": 954, "text": "(Skilling, 2004)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "Finally, we note that the model here could potentially find practical application in grammar correction. Although the noisy channel has been in use for many years in spelling correction, our model could be used more generally for grammar corrections, including insertions, deletions, and (with new noise functions) potentially changes in word order.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "For purposes of computing the Levenshtein distance between words, the epsilon label \u01eb is considered to be a zerolength letter string.3 The Levenshtein-distance kernel can be seen to be symmetric in w, w * as follows. Any path accepting w in the wFSA generated from w * involves the following non-zerocost transitions: insertions w \u2032I 1...i , deletions w D 1...j , and substitutions (w \u2192 w \u2032 ) S 1...k . For each such path P , there will be exactly one path P \u2032 in the wFSA generated from w that accepts w * with insertions w D 1...j , deletions w \u2032I 1...i , and substitutions (w \u2032 \u2192 w) S 1...k . Due to the symmetry of the Levenshtein distance, the paths P and P \u2032 will have identical costs. Therefore the kernel is indeed symmetric.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "This preceding context sharply distinguishes (3) from better-known, traditional garden-path sentences such as The horse raced past the barn fell, in which preceding context cannot be used to correctly disambiguate the part of speech of the ambiguous verb raced.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Technically, this involves the following transformation of a Levenshtein-distance wFSA. 
First, eliminate all loop arcs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The number of states added had little effect on results, so long as at least as many states were added as words remained in the sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "OpenFst: A general and efficient weighted finite-state transducer library", "authors": [ { "first": "C", "middle": [], "last": "Allauzen", "suffix": "" }, { "first": "M", "middle": [], "last": "Riley", "suffix": "" }, { "first": "J", "middle": [], "last": "Schalkwyk", "suffix": "" }, { "first": "W", "middle": [], "last": "Skut", "suffix": "" }, { "first": "M", "middle": [], "last": "Mohri", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the Ninth International Conference on Implementation and Application of Automata", "volume": "4783", "issue": "", "pages": "11--23", "other_ids": {}, "num": null, "urls": [], "raw_text": "Allauzen, C., Riley, M., Schalkwyk, J., Skut, W., and Mohri, M. (2007). OpenFst: A general and efficient weighted finite-state transducer library. In Proceedings of the Ninth International Confer- ence on Implementation and Application of Au- tomata, (CIAA 2007), volume 4783 of Lecture Notes in Computer Science, pages 11-23. Springer. http://www.openfst.org.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Incremental interpretation at verbs: restricting the domain of subsequent reference", "authors": [ { "first": "G", "middle": [ "T" ], "last": "Altmann", "suffix": "" }, { "first": "Y", "middle": [], "last": "Kamide", "suffix": "" } ], "year": 1999, "venue": "Cognition", "volume": "73", "issue": "3", "pages": "247--264", "other_ids": {}, "num": null, "urls": [], "raw_text": "Altmann, G. T. and Kamide, Y. (1999). Incremental in- terpretation at verbs: restricting the domain of subse- quent reference. Cognition, 73(3):247-264.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "The Smooth Signal Redundancy Hypothesis: A functional explanation for relationships between redundancy, prosodic prominence, and duration in spontaneous speech", "authors": [ { "first": "M", "middle": [], "last": "Aylett", "suffix": "" }, { "first": "A", "middle": [], "last": "Turk", "suffix": "" } ], "year": 2004, "venue": "Language and Speech", "volume": "47", "issue": "1", "pages": "31--56", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aylett, M. and Turk, A. (2004). The Smooth Sig- nal Redundancy Hypothesis: A functional explanation for relationships between redundancy, prosodic promi- nence, and duration in spontaneous speech. Language and Speech, 47(1):31-56.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "On formal properties of simple phrase structure grammars", "authors": [ { "first": "Y", "middle": [], "last": "Bar-Hillel", "suffix": "" }, { "first": "M", "middle": [], "last": "Perles", "suffix": "" }, { "first": "E", "middle": [], "last": "Shamir", "suffix": "" } ], "year": 1964, "venue": "Language and Information: Selected Essays on their Theory and Application", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bar-Hillel, Y., Perles, M., and Shamir, E. (1964). On for- mal properties of simple phrase structure grammars. In Language and Information: Selected Essays on their Theory and Application. 
Addison-Wesley.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Statistical parsing with a contextfree grammar and word statistics", "authors": [ { "first": "E", "middle": [], "last": "Charniak", "suffix": "" } ], "year": 1997, "venue": "Proceedings of AAAI", "volume": "", "issue": "", "pages": "598--603", "other_ids": {}, "num": null, "urls": [], "raw_text": "Charniak, E. (1997). Statistical parsing with a context- free grammar and word statistics. In Proceedings of AAAI, pages 598-603.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Statistical properties of probabilistic context-free grammars", "authors": [ { "first": "Z", "middle": [], "last": "Chi", "suffix": "" } ], "year": 1999, "venue": "Computational Linguistics", "volume": "25", "issue": "1", "pages": "131--160", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chi, Z. (1999). Statistical properties of probabilistic context-free grammars. Computational Linguistics, 25(1):131-160.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Thematic roles assigned along the garden path linger", "authors": [ { "first": "K", "middle": [], "last": "Christianson", "suffix": "" }, { "first": "A", "middle": [], "last": "Hollingworth", "suffix": "" }, { "first": "J", "middle": [ "F" ], "last": "Halliwell", "suffix": "" }, { "first": "F", "middle": [], "last": "Ferreira", "suffix": "" } ], "year": 2001, "venue": "Cognitive Psychology", "volume": "42", "issue": "", "pages": "368--407", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christianson, K., Hollingworth, A., Halliwell, J. F., and Ferreira, F. (2001). Thematic roles assigned along the garden path linger. Cognitive Psychology, 42:368- 407.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Headdriven parsing for word lattices", "authors": [ { "first": "C", "middle": [], "last": "Collins", "suffix": "" }, { "first": "B", "middle": [], "last": "Carpenter", "suffix": "" }, { "first": "Penn", "middle": [], "last": "", "suffix": "" }, { "first": "G", "middle": [], "last": "", "suffix": "" } ], "year": 2004, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Collins, C., Carpenter, B., and Penn, G. (2004). Head- driven parsing for word lattices. In Proceedings of ACL.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Head-Driven Statistical Models for Natural Language Parsing", "authors": [ { "first": "M", "middle": [], "last": "Collins", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Collins, M. (1999). Head-Driven Statistical Models for Natural Language Parsing. PhD thesis, University of Pennsylvania.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Elements of Information Theory", "authors": [ { "first": "T", "middle": [], "last": "Cover", "suffix": "" }, { "first": "J", "middle": [], "last": "Thomas", "suffix": "" } ], "year": 1991, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cover, T. and Thomas, J. (1991). Elements of Information Theory. 
John Wiley.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Good-enough representations in language comprehension", "authors": [ { "first": "F", "middle": [], "last": "Ferreira", "suffix": "" }, { "first": "V", "middle": [], "last": "Ferraro", "suffix": "" }, { "first": "K", "middle": [ "G D" ], "last": "Bailey", "suffix": "" } ], "year": 2002, "venue": "Current Directions in Psychological Science", "volume": "11", "issue": "", "pages": "11--15", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ferreira, F., Ferraro, V., and Bailey, K. G. D. (2002). Good-enough representations in language comprehen- sion. Current Directions in Psychological Science, 11:11-15.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "The Modularity of Mind", "authors": [ { "first": "J", "middle": [ "A" ], "last": "Fodor", "suffix": "" } ], "year": 1983, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fodor, J. A. (1983). The Modularity of Mind. MIT Press.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Simulating normalizing constants: from importance sampling to bridge sampling to path sampling", "authors": [ { "first": "A", "middle": [], "last": "Gelman", "suffix": "" }, { "first": "X.-L", "middle": [], "last": "Meng", "suffix": "" } ], "year": 1998, "venue": "Statistical Science", "volume": "13", "issue": "2", "pages": "163--185", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gelman, A. and Meng, X.-L. (1998). Simulating nor- malizing constants: from importance sampling to bridge sampling to path sampling. Statistical Science, 13(2):163-185.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Entropy rate constancy in text", "authors": [ { "first": "D", "middle": [], "last": "Genzel", "suffix": "" }, { "first": "E", "middle": [], "last": "Charniak", "suffix": "" } ], "year": 2002, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Genzel, D. and Charniak, E. (2002). Entropy rate con- stancy in text. In Proceedings of ACL.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Variation of entropy and parse trees of sentences as a function of the sentence number", "authors": [ { "first": "D", "middle": [], "last": "Genzel", "suffix": "" }, { "first": "E", "middle": [], "last": "Charniak", "suffix": "" } ], "year": 2003, "venue": "Empirical Methods in Natural Language Processing", "volume": "10", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Genzel, D. and Charniak, E. (2003). Variation of entropy and parse trees of sentences as a function of the sen- tence number. In Empirical Methods in Natural Lan- guage Processing, volume 10.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Reversible jump Markov chain Monte Carlo and Bayesian model determination", "authors": [ { "first": "P", "middle": [ "J" ], "last": "Green", "suffix": "" } ], "year": 1995, "venue": "Biometrika", "volume": "82", "issue": "", "pages": "711--732", "other_ids": {}, "num": null, "urls": [], "raw_text": "Green, P. J. (1995). Reversible jump Markov chain Monte Carlo and Bayesian model determination. 
Biometrika, 82:711-732.

Hale, J. (2001). A probabilistic Earley parser as a psycholinguistic model. In Proceedings of NAACL, volume 2, pages 159-166.

Hall, K. and Johnson, M. (2003). Language modeling using efficient best-first bottom-up parsing. In Proceedings of the IEEE Workshop on Automatic Speech Recognition and Understanding.

Hall, K. and Johnson, M. (2004). Attention shifting for parsing speech. In Proceedings of ACL.

Huang, L. and Chiang, D. (2005). Better k-best parsing. In Proceedings of the International Workshop on Parsing Technologies.

Itti, L. and Baldi, P. (2005). Bayesian surprise attracts human attention. In Advances in Neural Information Processing Systems.

Johnson, M. and Charniak, E. (2004). A TAG-based noisy channel model of speech repairs. In Proceedings of ACL.

Jurafsky, D. (1996). A probabilistic model of lexical and syntactic access and disambiguation. Cognitive Science, 20(2):137-194.

Keller, F. (2004). The entropy rate principle as a predictor of processing effort: An evaluation against eye-tracking data. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 317-324, Barcelona.

Legge, G. E., Klitz, T. S., and Tjan, B. S. (1997). Mr. Chips: An ideal-observer model of reading. Psychological Review, 104(3):524-553.

Levy, R. (2008). Expectation-based syntactic comprehension. Cognition, 106:1126-1177.

Levy, R. and Andrew, G. (2006). Tregex and Tsurgeon: tools for querying and manipulating tree data structures. In Proceedings of the 2006 Conference on Language Resources and Evaluation.

Levy, R. and Jaeger, T. F. (2007). Speakers optimize information density through syntactic reduction. In Advances in Neural Information Processing Systems.

Mohri, M. (1997). Finite-state transducers in language and speech processing. Computational Linguistics, 23(2):269-311.

Narayanan, S. and Jurafsky, D. (2002). A Bayesian model predicts human parse preference and reading time in sentence processing. In Advances in Neural Information Processing Systems, volume 14, pages 59-65.

Nederhof, M.-J. and Satta, G. (2003). Probabilistic parsing as intersection. In Proceedings of the International Workshop on Parsing Technologies.

Rayner, K. (1998). Eye movements in reading and information processing: 20 years of research. Psychological Bulletin, 124(3):372-422.

Rohde, D. (2005). TGrep2 User Manual, version 1.15 edition.

Skilling, J. (2004). Nested sampling. In Fischer, R., Preuss, R., and von Toussaint, U., editors, Bayesian Inference and Maximum Entropy Methods in Science and Engineering, number 735 in AIP Conference Proceedings, pages 395-405.

Smith, N. A. and Johnson, M. (2007). Weighted and probabilistic context-free grammars are equally expressive. Computational Linguistics, 33(4):477-491.

Tabor, W., Galantucci, B., and Richardson, D. (2004). Effects of merely local syntactic coherence on sentence processing. Journal of Memory and Language, 50(4):355-370.

Tanenhaus, M. K., Spivey-Knowlton, M. J., Eberhard, K., and Sedivy, J. C. (1995). Integration of visual and linguistic information in spoken language comprehension. Science, 268:1632-1634.

[Figure: Results for 100-best global inference, as a function of the Levenshtein distance penalty λ (in bits).]

[Figure: The grammar used for the incremental-inference experiment of Section 4. Rule weights given as negative log-probabilities in bits.]

[Figure: The Error Identification Signal (EIS) for (3) and (4a), as a function of the Levenshtein distance penalty λ (in bits).]
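The figure captions above refer to the noise model's free parameter, the Levenshtein distance penalty λ, measured in bits, and to rule weights stated as negative log-probabilities in bits. As a minimal illustrative sketch only, assuming λ bits are charged per string edit so that the total penalty behaves as a negative log2-probability, the Python below shows how such a penalty would score an observed string against a candidate intended string. The function names and example strings are hypothetical; the paper's actual noise model is implemented with weighted finite-state automata and is not reproduced here.

# Minimal sketch (not the paper's implementation): a Levenshtein distance
# penalty of lam bits per edit, read as a negative log2-probability, so that
# the implied channel likelihood is proportional to 2 ** (-lam * distance).

def levenshtein(a, b):
    """Standard dynamic-programming edit distance between strings a and b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution (free if equal)
        prev = curr
    return prev[-1]

def channel_weight_bits(observed, intended, lam):
    """Noise penalty, in bits, for perceiving `observed` when `intended` was produced."""
    return lam * levenshtein(observed, intended)

def channel_score(observed, intended, lam):
    """Unnormalized channel likelihood implied by the bit penalty."""
    return 2.0 ** (-channel_weight_bits(observed, intended, lam))

if __name__ == "__main__":
    # Hypothetical example: with lam = 4 bits per edit, a one-edit neighbor
    # costs 4 bits, i.e. 1/16 the likelihood of the verbatim string.
    print(channel_weight_bits("the coach smiled", "the coach smiled", 4.0))  # 0.0
    print(channel_weight_bits("the coach smiled", "the coach smiles", 4.0))  # 4.0
    print(channel_score("the coach smiled", "the coach smiles", 4.0))        # 0.0625

Under this reading, raising λ makes near-neighbor strings exponentially less plausible as alternatives to the observed input, which is why the figures report results as a function of λ.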