{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T12:35:13.641461Z" }, "title": "Psycholinguistic Diagnosis of Language Models' Commonsense Reasoning", "authors": [ { "first": "Yan", "middle": [], "last": "Cong", "suffix": "", "affiliation": {}, "email": "yancong222@gmail.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Neural language models have attracted a lot of attention in the past few years. More and more researchers are getting intrigued by how language models encode commonsense, specifically what kind of commonsense they understand, and why they do. This paper analyzed neural language models' understanding of commonsense pragmatics (i.e., implied meanings) through human behavioral and neurophysiological data. These psycholinguistic tests are designed to draw conclusions based on predictive responses in context, making them very well suited to test word-prediction models such as BERT in natural settings. They can provide the appropriate prompts and tasks to answer questions about linguistic mechanisms underlying predictive responses. This paper adopted psycholinguistic datasets to probe language models' commonsense reasoning. Findings suggest that GPT-3's performance was mostly at chance in the psycholinguistic tasks. We also showed that DistillBERT had some understanding of the (implied) intent that's shared among most people. Such intent is implicitly reflected in the usage of conversational implicatures and presuppositions. Whether or not fine-tuning improved its performance to human-level depends on the type of commonsense reasoning.", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "Neural language models have attracted a lot of attention in the past few years. More and more researchers are getting intrigued by how language models encode commonsense, specifically what kind of commonsense they understand, and why they do. This paper analyzed neural language models' understanding of commonsense pragmatics (i.e., implied meanings) through human behavioral and neurophysiological data. These psycholinguistic tests are designed to draw conclusions based on predictive responses in context, making them very well suited to test word-prediction models such as BERT in natural settings. They can provide the appropriate prompts and tasks to answer questions about linguistic mechanisms underlying predictive responses. This paper adopted psycholinguistic datasets to probe language models' commonsense reasoning. Findings suggest that GPT-3's performance was mostly at chance in the psycholinguistic tasks. We also showed that DistillBERT had some understanding of the (implied) intent that's shared among most people. Such intent is implicitly reflected in the usage of conversational implicatures and presuppositions. Whether or not fine-tuning improved its performance to human-level depends on the type of commonsense reasoning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In this paper, we focus on Language Models' (LMs) performance in commonsense reasoning tasks. Different from language semantics concerning logical relations between isolated sentence meanings, we take pragmatics to be sentences' relations relying on conversational participants' commonsense, such as the basic level intent that is commonly shared among most people. 
Humans reason about what their interlocutor could have said but chose not to, thereby drawing various inferences. The way humans put linguistic meanings to use depends on social interaction and commonsense assumptions. What about machines whose pre-training does not involve social interaction? To what extent do they still have this pragmatic knowledge? How do they cooperate without any form of training in Gricean pragmatics (Grice, 1975)? This paper attempts to answer these questions by examining transformer LMs' performance in commonsense reasoning.", "cite_spans": [ { "start": 791, "end": 804, "text": "(Grice, 1975)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We focus on two commonsense pragmatics phenomena: (i) Presupposition (henceforth Presp); for example, by using the determiner the in the utterance \"the teacher spoke to me\", most people typically presuppose the existence of such a teacher in the context; (ii) Scalar Implicature (henceforth SI); for example, by using the quantifier some in \"I ate some of the cookies\", most people generally imply \"not all\". We provided linguistic perspectives on how humans compute and evaluate commonsense pragmatics. We then assessed the extent to which LMs can understand the meanings pragmatically enriched by human speakers. Moreover, we fine-tuned LMs with pragmatic inference datasets. Evaluation comparisons are reported and discussed. We make all code and test data available for additional testing 1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "LMs' knowledge about syntax and semantics is relatively well studied (Tenney et al., 2019; Devlin et al., 2019) . Considerably fewer studies have been done on speaker intent: the implied meaning that is commonly shared among most people. This is called conversational implicature in the pragmatics literature (Grice, 1975) . Implicature phenomena such as the quantifiers some and many are tested in recent studies (Schuster et al., 2020; Jeretic et al., 2020) . The diagnostics in these studies are controlled. Most of them incorporate offline human responses to words in context, such as acceptability judgment surveys.", "cite_spans": [ { "start": 69, "end": 89, "text": "Tenney et al., 2019;", "ref_id": "BIBREF15" }, { "start": 90, "end": 110, "text": "Devlin et al., 2019)", "ref_id": "BIBREF4" }, { "start": 317, "end": 330, "text": "(Grice, 1975)", "ref_id": null }, { "start": 415, "end": 438, "text": "(Schuster et al., 2020;", "ref_id": "BIBREF13" }, { "start": 439, "end": 460, "text": "Jeretic et al., 2020)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "Relatively few studies include online human responses in the assessment (Ettinger, 2020) . Online measurement uses neurolinguistic techniques such as electroencephalography (EEG) and event-related potentials (ERPs) to record brain activity (Luck, 2012) . One ERP component, the N400, is a negativity measured with EEG that peaks at about 400 milliseconds after stimulus onset and has been widely used to investigate semantic processing. The N400 is relevant here because it is a real-time, online measurement of the human brain's response to different language phenomena, and it is most reliably elicited when humans process sentences containing semantic anomalies. 
Online measurement differs from offline judgment surveys or cloze tests in that it reveals the human brain's real-time sensitivity to (linguistic) cues. We examine LMs using human-centered datasets that are collected through both offline and online experiments.", "cite_spans": [ { "start": 71, "end": 87, "text": "(Ettinger, 2020)", "ref_id": "BIBREF5" }, { "start": 228, "end": 240, "text": "(Luck, 2012)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "How \"human-like\" the state-of-the-art LMs are (cognitive plausibility) has not been comprehensively justified (Wang et al., 2019) . Goldstein et al. (2021) provide empirical evidence that the human brain and GPT-2 share fundamental computational principles as they process natural language, in the sense that both are engaged in continuous next-word prediction and both represent words as a function of the previous context. Against this background, we study LMs' cognitive plausibility by examining their performance in understanding pragmatically enriched meanings, which are implied or presupposed among most people (i.e., conversational participants) to convey their intentions.", "cite_spans": [ { "start": 105, "end": 124, "text": "(Wang et al., 2019)", "ref_id": "BIBREF16" }, { "start": 127, "end": 150, "text": "Goldstein et al. (2021)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "We first designed most of the tests in the form of cloze tasks, so as to test the pre-trained LMs in their most natural setting, without interference from fine-tuning. The main schema we used in this study is the minimal pair paradigm, in which two linguistic items are in contrastive distribution, meaning the two items are identical except for a single aspect. The notion of a minimal pair is widely used in linguistic experiments probing the underlying structure of a linguistic utterance. Typically, one of the two items is pragmatically odd according to most people's commonsense knowledge (marked by #), relative to the other utterance in the minimal pair.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "The hypothesis and the accuracy calculation pipeline are as follows. If LMs understand commonsense intent, which gets reflected in the usage of SI and Presp, LMs should endorse the pragmatically good sentence more often than its pragmatically odd counterpart in a minimal pair. To quantify such \"endorsement\", we calculated the percentage p of cases in which LMs favor the pragmatically good sentence over the pragmatically odd one. The extent to which LMs (dis-)favor a sentence is derived from LMs' tokenized sequence log probability (henceforth logprob). The accuracy mean for each condition (good vs. bad/so-so) is then calculated per phenomenon (SI and Presp), using the sum of percentage p divided by the number of sentences, grouped by phenomenon. [Table 1, model cards (pre-trained LMs): DistillBERT-base-uncased, 67M parameters, 6 layers; GPT-3/InstructGPT, 175.0B parameters, 96 layers.] DistillBERT (Sanh et al., 2019) is used because, having only the encoder transformer, it can draw on right-hand context for word prediction, which is necessary here. We compare DistillBERT with another type of LM, GPT-3 (Brown et al., 2020) , which has only the decoder. 
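To make the accuracy pipeline concrete, the sketch below (Python; an illustrative sketch, not the released implementation) computes the accuracy mean per phenomenon given any sentence-level scoring function score, for example a length-averaged logprob.

# Illustrative sketch of the accuracy-mean computation over minimal pairs.
# pairs holds (good_sentence, odd_sentence, phenomenon) triples; score is any
# function that returns a sentence-level score, e.g. a length-averaged logprob.
from collections import defaultdict

def accuracy_by_phenomenon(pairs, score):
    endorsed = defaultdict(list)
    for good, odd, phenomenon in pairs:
        # The LM 'endorses' the pragmatically good member of a minimal pair
        # whenever it assigns that member a higher score than the odd member.
        endorsed[phenomenon].append(score(good) > score(odd))
    # Accuracy mean per phenomenon: endorsements divided by the number of pairs.
    return {ph: sum(flags) / len(flags) for ph, flags in endorsed.items()}
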
We present model cards in Table (1).", "cite_spans": [ { "start": 851, "end": 870, "text": "(Sanh et al., 2019)", "ref_id": "BIBREF12" }, { "start": 1058, "end": 1078, "text": "(Brown et al., 2020)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 1135, "end": 1144, "text": "Table (1)", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "Study 1: Presupposition Our first study builds on Singh et al. (2016) . They performed human behavioral acceptance judgment experiments using the presupposition trigger the. Participants were asked to drop out when they thought the sentence stopped making sense. Singh et al. (2016) 's findings show that humans think utterances make less sense relative to the controls when the presupposed information is implausible. We extracted 82 items from the Singh et al. (2016) human experiment stimuli, which are already cognitively justified and freely available in their appendix. The utterance Seth went to jail/ # a restaurant on Saturday night. The guard spoke to him there for a while. presupposes that there is a unique guard in the context. Given commonsense world knowledge and the close association of guard and jail, \"Seth went to jail\" is a more likely and plausible context, thus \"a restaurant\" is marked with #. The utterance Kristen went to a restaurant/ # jail in the morning. The waiter served her there quickly. presupposes the existence of a (unique) waiter in the context. \"Kristen went to a restaurant\" is a better context in the sense that it lays out a background where there is a waiter. By contrast, jail is rarely associated with waiter, so \"went to jail\" is implausible and is marked with #. It is both the uniqueness of the \"waiter\" and the relevance of the job to the place \"restaurant\" that affect the context. Singh et al. (2016) reported that in this stops-making-sense paradigm, human participants were near-ceiling in accepting plausible conditions: at the last region of the sentence, the acceptance rate was 95% in the plausible condition. For the implausible the condition, by the end of the sentence 50% of participants had dropped out, since the utterance stops making sense and most people cannot accept it. Building on the Singh et al. (2016) human experiment, we evaluated LMs' sensitivity to Presp. We compared the accuracy mean of each condition, as exemplified by John went to school on Monday afternoon. The substitute teacher spoke to him there briefly. versus John went to a concert on Monday afternoon. The substitute teacher spoke to him there briefly. The two utterances differ in only one element, \"school\"/\"concert\". The former is pragmatically good relative to the latter, given that the presupposes a context where there is a teacher, and commonsense tells us that \"teacher\" and \"school\" are closer than \"teacher\" and \"concert\".", "cite_spans": [ { "start": 55, "end": 74, "text": "Singh et al. (2016)", "ref_id": "BIBREF14" }, { "start": 265, "end": 284, "text": "Singh et al. (2016)", "ref_id": "BIBREF14" }, { "start": 448, "end": 467, "text": "Singh et al. (2016)", "ref_id": "BIBREF14" }, { "start": 1409, "end": 1428, "text": "Singh et al. (2016)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "GPT-3 is evaluated by the extent to which it favors plausible cases over the implausible ones. Sequential word-by-word logprobs are generated and transformed into percentages. We take the sum of word-level logprobs averaged by sentence length as a proxy for sentence naturalness. 
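As a rough illustration of this scoring step, the sketch below uses GPT-2 from Hugging Face as a freely available stand-in for GPT-3 (which we accessed through its API), so the model choice and helper name are assumptions rather than our exact setup; a function like this could serve as the score argument in the earlier sketch.

# Sketch of length-averaged sequence logprob with a causal LM (GPT-2 stand-in).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')
model.eval()

def mean_logprob(sentence):
    # Sum of token-level log probabilities divided by the number of tokens.
    ids = tokenizer(sentence, return_tensors='pt').input_ids
    with torch.no_grad():
        # With labels=input_ids the loss is the mean negative log-likelihood
        # per predicted token; its negation is the mean token logprob.
        loss = model(ids, labels=ids).loss
    return -loss.item()

plausible = ('Seth went to jail on Saturday night. '
             'The guard spoke to him there for a while.')
implausible = ('Seth went to a restaurant on Saturday night. '
               'The guard spoke to him there for a while.')
# The pair counts as correct if the plausible sentence receives the higher score.
print(mean_logprob(plausible) > mean_logprob(implausible))
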
A higher percentage indicates that GPT-3 evaluates the sentence as more natural. DistillBERT is evaluated through critical word prediction. The noun phrase in the initial sentence is masked and taken as the critical word (e.g., school is masked in \"John went to school. The substitute teacher spoke to him there briefly.\", whereas concert is masked in \"John went to a concert. The substitute teacher spoke to him there briefly.\"). Given that the human data show a preference for the plausible over the implausible, DistillBERT is considered to have succeeded if the critical word is in its topK (K=5) tokens for the plausible sentence. It is also considered to have succeeded if the critical word is NOT in its topK for the implausible sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "Study 2: Scalar Implicature According to Nieuwland et al. (2010), relative clauses can leave implicatures unnoticed by most people in sentence processing. Table (2) shows that there is a pragmatic violation in (a) if the conversational participant actively draws the pragmatic inference that \"some (but not all)\" office buildings have desks. However, this violation is left unnoticed in (a) due to the presence of the relative clause. (c) is relatively bad and implausible compared to (d): the violation in (c) is noticed due to the absence of a relative clause. Note that Nieuwland et al. (2010) considered the Communication sub-scale of the Autism-Spectrum Quotient questionnaire (AQ) (Baron-Cohen et al., 1994 , 2001; Baron-Cohen, 2008) to be a proxy for an individual's pragmatic skills. According to Nieuwland et al. (2010), the AQ quantifies pragmatic capabilities on a continuum from autism to typicality.", "cite_spans": [ { "start": 678, "end": 702, "text": "Baron-Cohen et al., 1994", "ref_id": "BIBREF2" }, { "start": 703, "end": 709, "text": ", 2001", "ref_id": "BIBREF1" }, { "start": 710, "end": 728, "text": "Baron-Cohen, 2008)", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 152, "end": 159, "text": "Table (", "ref_id": null } ], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "Nieuwland et al. (2010) reported that only pragmatically skilled participants (i.e., lower autism scores) are sensitive to the pragmatic violation in (c) (r=-.53, p=0.003). For (a), in which the implicature is left unnoticed, so is the violation: there is thus no significant difference between the pragmatically skilled participants and those who have high autism scores (r=-.29, p=0.13). Overall, pragmatically skilled people are good at generating the robust pragmatic inference that some implies not all, which gives rise to a larger N400 when the utterance is pragmatically bad; the N400 is a well-established ERP elicited by anomalous stimuli (Luck, 2012).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "We extracted 168 items from Nieuwland et al. (2010). An example item from their data is \"Some people have lungs/pets, which require good care\". GPT-3 is used for sequential word prediction. Using the sum of token-level logprobs averaged by sentence length, we examine whether there is a difference between cases where the SI is and is not noticed. GPT-3 is considered to have succeeded if the plausible sentence mean is higher (hence more favorable) than the so-so/unacceptable sentence mean. We use masked language models like DistillBERT for critical word prediction. We masked the quantifiers and take some as the critical word for (a, b, d); a sketch of this top-5 critical-word check is given below. 
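The following sketch illustrates the top-5 critical-word check with DistillBERT; K=5 follows the text, while the helper name and preprocessing details are illustrative assumptions rather than our exact code.

# Sketch of the masked critical-word check (top-K) with DistillBERT.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained('distilbert-base-uncased')
mlm = AutoModelForMaskedLM.from_pretrained('distilbert-base-uncased')
mlm.eval()

def critical_word_in_topk(masked_sentence, critical_word, k=5):
    # Predict the masked position and test whether the critical word is in the top K.
    inputs = tok(masked_sentence, return_tensors='pt')
    mask_positions = (inputs.input_ids[0] == tok.mask_token_id).nonzero(as_tuple=True)[0]
    with torch.no_grad():
        logits = mlm(**inputs).logits[0, mask_positions[0]]
    top_ids = logits.topk(k).indices.tolist()
    return tok.convert_tokens_to_ids(critical_word) in top_ids

sentence = (f'John went to {tok.mask_token} on Monday afternoon. '
            'The substitute teacher spoke to him there briefly.')
# For the plausible item, success means the masked noun is among the top 5.
print(critical_word_in_topk(sentence, 'school'))
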
We take all as the critical word for (c), because the SI is noticed and all is the commonsense intent. Given that (a, b, c, d) are all not implausible, BERT is marked as having succeeded if the critical word is in its top-5 token list.", "cite_spans": [], "ref_spans": [ { "start": 607, "end": 613, "text": "(a,b,d", "ref_id": null } ], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "Sanity check One may wonder to what extent an LM is merely leveraging noun joint probability. This motivates us to check whether the test datasets contain enough noun co-occurrence patterns that could make the LMs find a likelihood pattern rather than actually reason to conclude which sentence is more plausible. For instance, the co-occurrence of office buildings and desks in the SI good pair seems to be more frequently seen than that of office buildings and plants in the bad pair, since plants are not essential, but desks are. Similarly, for the Presp stimuli, it appears that humans tend to associate jail with guard more frequently than they do restaurant with guard. To address these confounding factors, we use n-grams to calculate joint probability (Yin et al., 2016) . Results show that 70% of the SI and 50% of the Presp stimuli show a higher co-occurrence probability in the 'good' sentence than in the 'bad' sentence 2 . (Footnote 2: This would seem to raise questions about the strength of the conclusions being drawn (cf. Section 5): it seems that LMs merely leverage co-occurrence frequency; on the other hand, it also appears that LMs' trend aligns with joint frequency, and LMs do not fail the sanity check, because frequency/prevalence heavily influences humans' commonsense reasoning too.)", "cite_spans": [ { "start": 763, "end": 781, "text": "(Yin et al., 2016)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "In order to examine how to improve LMs' accuracy in these downstream tasks, and to further evaluate pre-trained LMs versus fine-tuned LMs, we fine-tuned DistillBERT-base-uncased with the ImpPres dataset (Jeretic et al., 2020) . It consists of >25k semi-automatically generated sentence pairs illustrating well-studied commonsense pragmatic inference types. 14,100 tagged utterance pairs were used for training on Presp, and 1,410 tagged pairs for testing. Here is the input representation: sentence 1: Victoria's mall that has hurt Sam might upset Helen.; sentence 2: Victoria doesn't have exactly one mall that has hurt Sam.; label: contradiction. As for SI, 6,000 tagged utterance pairs were used for training and 600 for testing. Here is the input representation: sentence 1: The teacher resembles some sketches.; sentence 2: The teacher doesn't resemble all sketches.; label: entailment. We fine-tuned DistillBERT-base-uncased on an Apple M1 CPU for 3 epochs. We used a batch size of 64 and optimized using Adam (Kingma and Ba, 2014) with betas=(0.9,0.999) and a learning rate of 2e-05.", "cite_spans": [ { "start": 203, "end": 225, "text": "(Jeretic et al., 2020)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Fine-tuning DistillBERT with ImpPres", "sec_num": "4" }, { "text": "The error bars in Fig.1 show that DistillBERT does not seem to have difficulty detecting Presp, and fine-tuning slightly decreases its performance. This is likely because the Singh et al. (2016) data are not formatted the same way as the ImpPres training data. Fine-tuning might have misled DistillBERT. 
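For reference, the fine-tuning configuration from Section 4 (3 epochs, batch size 64, Adam with betas=(0.9, 0.999), learning rate 2e-05) corresponds roughly to the sketch below; the dataset variables, column names, and three-way label set are illustrative assumptions rather than a transcription of our training script.

# Rough sketch of fine-tuning DistillBERT on ImpPres-style sentence pairs.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

tok = AutoTokenizer.from_pretrained('distilbert-base-uncased')
model = AutoModelForSequenceClassification.from_pretrained(
    'distilbert-base-uncased', num_labels=3)  # e.g. entailment / neutral / contradiction

def encode(example):
    # Each item is a tagged sentence pair, e.g. 'The teacher resembles some
    # sketches.' / 'The teacher doesn't resemble all sketches.' -> entailment.
    return tok(example['sentence1'], example['sentence2'],
               truncation=True, max_length=128)

args = TrainingArguments(
    output_dir='distilbert-imppres',
    num_train_epochs=3,
    per_device_train_batch_size=64,
    learning_rate=2e-05,
    adam_beta1=0.9,
    adam_beta2=0.999,
)

# train_ds / test_ds: tokenized datasets built from the tagged presupposition
# (14,100 train / 1,410 test) or implicature (6,000 / 600) splits.
# trainer = Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=test_ds)
# trainer.train()
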
Regarding SI, fine-tuning significantly increases LMs' performance, indicating that the ImpPres dataset is a good candidate for improving LMs' sensitivity to commonsense SIs. The error bars in Fig.2 indicate that GPT-3 is slightly better at detecting SI than Presp, but overall GPT-3 is not good at the psycholinguistic task. This may be because GPT-3 has a different architecture. LMs' performance aligns with the n-gram baseline in that, overall, the SI dataset is less challenging than the Presp dataset: 70% of the SI dataset shows the favorable co-occurrence direction, i.e., the pair tagged as 'good' also shows a higher noun co-occurrence rate than the 'bad' pair does. The Presp dataset is less helpful in this respect (50%).", "cite_spans": [ { "start": 174, "end": 193, "text": "Singh et al. (2016)", "ref_id": "BIBREF14" } ], "ref_spans": [ { "start": 13, "end": 18, "text": "Fig.1", "ref_id": "FIGREF0" }, { "start": 488, "end": 493, "text": "Fig.2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Evaluations and discussion", "sec_num": "5" }, { "text": "It is worth noting that it is not clear whether we can make a direct comparison between human decisions and LMs' rates, especially for the SI cases. Nieuwland et al. (2010) suggests that for humans, underinformative and pragmatically bad statements elicited larger N400 ERPs than informative and pragmatically good statements. However, this does not directly transfer to the accuracy mean metric we used for LMs. All that Fig.2 shows is that GPT-3's performance is roughly at chance with respect to accuracy mean. For future studies, we plan to conduct parallel human studies to collect baseline human decision rates.", "cite_spans": [ { "start": 142, "end": 165, "text": "Nieuwland et al. (2010)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 414, "end": 419, "text": "Fig.2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Evaluations and discussion", "sec_num": "5" }, { "text": "Regarding LM evaluation analysis, our study shows that in order to probe commonsense knowledge in LMs, understand their reasoning mechanisms, and identify the limitations that a lack of commonsense knowledge poses for AI applications, we need to carefully consider how to prompt the pre-trained LMs. For masked LMs such as DistillBERT, our results suggest that an appropriate method to examine how 'human-like' LMs are is to mask the same token as psycholinguists do in their behavioral/neural experiments with humans, and to keep the same contextual information, so that the experimental setting is as close to the human experiments as possible. As for unidirectional LMs like GPT-3, they read in sentences using almost the same fundamental mechanisms as humans do; we thus took the sentence as the unit over which to derive logprobs. How much GPT-3 likes a sentence is directly reflected in its sentence logprob. It is crucial to use different metrics for BERT and GPT-3 to avoid the pitfall of comparing the two with the same metric, as they are trained very differently, and a perplexity comparison would be inconclusive. Our study has some limitations. Although we mention multiple times that these pragmatic phenomena often arise in conversation, the actual datasets we used are not conversational. For future work, we hope to see how LMs perform in a conversational scenario in terms of commonsense pragmatics. This could give us a better grasp of LMs' competence at the conversational level of language understanding. 
For the current work, our motivation for using non-conversational human data for conversational implicature is that LMs are not trained in the same way, through many dialogues, but rather on text found on the web. Additionally, we acknowledge that there were some glitches in DistillBERT's SI evaluation setting. BERT is considered to have succeeded as long as the critical word is in its topK. By not penalizing cases where some is ranked above all when both are in the topK choices, we accept the LM's choice as \"correct\" while it isn't. It is also not very surprising that all does not show up as often as other options in BERT's topK choices in scenarios where all is the commonsense intent, given that the LM might generate adjectives rather than quantifiers to modify the following noun. It is likely that such predictions have nothing to do with the implicature; nevertheless, they still make sense considering that the LM's learning algorithm uses a masked loss. For future research, we hope to reach more valid conclusions by directly comparing whether all is relatively more likely than some.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluations and discussion", "sec_num": "5" }, { "text": "Humans show no difficulty in using commonsense knowledge to reason about daily conversations. By contrast, the extent to which LMs are sensitive to commonsense reasoning has remained an elusive question in AI research for decades. Here, we provide an approach for commonsense reasoning tasks: incorporating online and offline psycholinguistic datasets into LM evaluation. Using well-controlled task designs and high-resolution neurophysiological equipment, psycholinguistics studies all kinds of implicit meanings in natural language. To examine how 'human-like' LMs can be, human data is the key. These methods can improve the interpretability and explainability of neural models for reasoning about implied yet commonsense messages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluations and discussion", "sec_num": "5" }, { "text": "To sum up, our paper aims to evaluate DistillBERT and GPT-3's ability to make human-like pragmatic inferences, such as SI and Presp, through human behavioral and neural data. Findings show that psycholinguistic datasets can help us get a good grasp of LMs' accuracy in commonsense reasoning. Our study adopted a theory-supported lens for investigating the often vaguely defined \"commonsense\", and illustrated how to establish a connection between commonsense reasoning in NLP and pragmatic semantics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluations and discussion", "sec_num": "5" }, { "text": "https://github.com/yancong222/Pragamtics-Commonsense-LMs", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Autism, hypersystemizing, and truth", "authors": [ { "first": "Simon", "middle": [], "last": "Baron-Cohen", "suffix": "" } ], "year": 2008, "venue": "Quarterly Journal of Experimental Psychology", "volume": "61", "issue": "1", "pages": "64--75", "other_ids": {}, "num": null, "urls": [], "raw_text": "Simon Baron-Cohen. 2008. Autism, hypersystemizing, and truth. 
Quarterly Journal of Experimental Psychology, 61(1):64-75.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "The autism-spectrum quotient (aq): Evidence from asperger syndrome/high-functioning autism, malesand females, scientists and mathematicians", "authors": [ { "first": "Simon", "middle": [], "last": "Baron-Cohen", "suffix": "" }, { "first": "Sally", "middle": [], "last": "Wheelwright", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Skinner", "suffix": "" }, { "first": "Joanne", "middle": [], "last": "Martin", "suffix": "" }, { "first": "Emma", "middle": [], "last": "Clubley", "suffix": "" } ], "year": 2001, "venue": "Journal of autism and developmental disorders", "volume": "31", "issue": "1", "pages": "5--17", "other_ids": {}, "num": null, "urls": [], "raw_text": "Simon Baron-Cohen, Sally Wheelwright, Richard Skin- ner, Joanne Martin, and Emma Clubley. 2001. The autism-spectrum quotient (aq): Evidence from as- perger syndrome/high-functioning autism, malesand females, scientists and mathematicians. Journal of autism and developmental disorders, 31(1):5-17.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Understanding other minds: Perspectives from autism. In Most of the chapters in this book were presented in draft form at a workshop in Seattle", "authors": [ { "first": "Simon", "middle": [ "Ed" ], "last": "Baron-Cohen", "suffix": "" }, { "first": "Helen", "middle": [ "Ed" ], "last": "Tager-Flusberg", "suffix": "" }, { "first": "Donald J", "middle": [], "last": "Cohen", "suffix": "" } ], "year": 1991, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Simon Ed Baron-Cohen, Helen Ed Tager-Flusberg, and Donald J Cohen. 1994. Understanding other minds: Perspectives from autism. In Most of the chapters in this book were presented in draft form at a workshop in Seattle, Apr 1991. Oxford University Press.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Language models are few-shot learners", "authors": [ { "first": "Tom", "middle": [], "last": "Brown", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Mann", "suffix": "" }, { "first": "Nick", "middle": [], "last": "Ryder", "suffix": "" }, { "first": "Melanie", "middle": [], "last": "Subbiah", "suffix": "" }, { "first": "Jared", "middle": [ "D" ], "last": "Kaplan", "suffix": "" }, { "first": "Prafulla", "middle": [], "last": "Dhariwal", "suffix": "" }, { "first": "Arvind", "middle": [], "last": "Neelakantan", "suffix": "" }, { "first": "Pranav", "middle": [], "last": "Shyam", "suffix": "" }, { "first": "Girish", "middle": [], "last": "Sastry", "suffix": "" }, { "first": "Amanda", "middle": [], "last": "Askell", "suffix": "" } ], "year": 2020, "venue": "Advances in neural information processing systems", "volume": "33", "issue": "", "pages": "1877--1901", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. 
Advances in neural information processing systems, 33:1877-1901.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models", "authors": [ { "first": "Allyson", "middle": [], "last": "Ettinger", "suffix": "" } ], "year": 2020, "venue": "Transactions of the Association for Computational Linguistics", "volume": "8", "issue": "", "pages": "34--48", "other_ids": { "DOI": [ "10.1162/tacl_a_00298" ] }, "num": null, "urls": [], "raw_text": "Allyson Ettinger. 2020. What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models. Transactions of the Association for Computational Linguistics, 8:34-48.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Thinking ahead: spontaneous prediction in context as a keystone of language in humans and machines. bioRxiv", "authors": [ { "first": "Ariel", "middle": [], "last": "Goldstein", "suffix": "" }, { "first": "Zaid", "middle": [], "last": "Zada", "suffix": "" }, { "first": "Eliav", "middle": [], "last": "Buchnik", "suffix": "" }, { "first": "Mariano", "middle": [], "last": "Schain", "suffix": "" }, { "first": "Amy", "middle": [], "last": "Price", "suffix": "" }, { "first": "Bobbi", "middle": [], "last": "Aubrey", "suffix": "" }, { "first": "A", "middle": [], "last": "Samuel", "suffix": "" }, { "first": "Amir", "middle": [], "last": "Nastase", "suffix": "" }, { "first": "Dotan", "middle": [], "last": "Feder", "suffix": "" }, { "first": "Alon", "middle": [], "last": "Emanuel", "suffix": "" }, { "first": "", "middle": [], "last": "Cohen", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "2020--2032", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ariel Goldstein, Zaid Zada, Eliav Buchnik, Mariano Schain, Amy Price, Bobbi Aubrey, Samuel A Nas- tase, Amir Feder, Dotan Emanuel, Alon Cohen, et al. 2021. Thinking ahead: spontaneous prediction in context as a keystone of language in humans and machines. bioRxiv, pages 2020-12.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Are natural language inference models IMPPRESsive? 
Learning IMPlicature and PRESupposition", "authors": [ { "first": "Paloma", "middle": [], "last": "Jeretic", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Warstadt", "suffix": "" }, { "first": "Suvrat", "middle": [], "last": "Bhooshan", "suffix": "" }, { "first": "Adina", "middle": [], "last": "Williams", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "8690--8705", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.768" ] }, "num": null, "urls": [], "raw_text": "Paloma Jeretic, Alex Warstadt, Suvrat Bhooshan, and Adina Williams. 2020. Are natural language in- ference models IMPPRESsive? Learning IMPli- cature and PRESupposition. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8690-8705, On- line. Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1412.6980" ] }, "num": null, "urls": [], "raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Event-related potentials", "authors": [ { "first": "J", "middle": [], "last": "Steven", "suffix": "" }, { "first": "", "middle": [], "last": "Luck", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Steven J Luck. 2012. Event-related potentials.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "On the incrementality of pragmatic processing: An erp investigationof informativeness and pragmatic abilities", "authors": [ { "first": "S", "middle": [], "last": "Mante", "suffix": "" }, { "first": "Tali", "middle": [], "last": "Nieuwland", "suffix": "" }, { "first": "Gina", "middle": [ "R" ], "last": "Ditman", "suffix": "" }, { "first": "", "middle": [], "last": "Kuperberg", "suffix": "" } ], "year": 2010, "venue": "Journal of Memory and Language", "volume": "63", "issue": "", "pages": "324--346", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mante S. Nieuwland, Tali Ditman, and Gina R. Ku- perberg. 2010. On the incrementality of pragmatic processing: An erp investigationof informativeness and pragmatic abilities. Journal of Memory and Language, 63:324-346.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter", "authors": [ { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1910.01108" ] }, "num": null, "urls": [], "raw_text": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. 
arXiv preprint arXiv:1910.01108.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Harnessing the linguistic signal to predict scalar inferences", "authors": [ { "first": "Sebastian", "middle": [], "last": "Schuster", "suffix": "" }, { "first": "Yuxing", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Judith", "middle": [], "last": "Degen", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "5387--5403", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.479" ] }, "num": null, "urls": [], "raw_text": "Sebastian Schuster, Yuxing Chen, and Judith De- gen. 2020. Harnessing the linguistic signal to predict scalar inferences. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5387-5403, On- line. Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Accommodating presuppositions is inappropriate in implausible contexts", "authors": [ { "first": "Raj", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Evelina", "middle": [], "last": "Fedorenko", "suffix": "" }, { "first": "Kyle", "middle": [], "last": "Mahowald", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Gibson", "suffix": "" } ], "year": 2016, "venue": "Cognitive Science", "volume": "40", "issue": "", "pages": "607--634", "other_ids": {}, "num": null, "urls": [], "raw_text": "Raj Singh, Evelina Fedorenko, Kyle Mahowald, and Edward Gibson. 2016. Accommodating presup- positions is inappropriate in implausible contexts. Cognitive Science, 40:607-634.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "BERT rediscovers the classical NLP pipeline", "authors": [ { "first": "Ian", "middle": [], "last": "Tenney", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4593--4601", "other_ids": { "DOI": [ "10.18653/v1/P19-1452" ] }, "num": null, "urls": [], "raw_text": "Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019. BERT rediscovers the classical NLP pipeline. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4593-4601, Florence, Italy. Association for Com- putational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Superglue: A stickier benchmark for general-purpose language understanding systems", "authors": [ { "first": "Alex", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yada", "middle": [], "last": "Pruksachatkun", "suffix": "" }, { "first": "Nikita", "middle": [], "last": "Nangia", "suffix": "" }, { "first": "Amanpreet", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Julian", "middle": [], "last": "Michael", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Hill", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Bowman", "suffix": "" } ], "year": 2019, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Wang, Yada Pruksachatkun, Nikita Nangia, Aman- preet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019. 
Superglue: A stick- ier benchmark for general-purpose language under- standing systems. Advances in neural information processing systems, 32.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "BLiMP: A benchmark of linguistic minimal pairs for English", "authors": [ { "first": "Alex", "middle": [], "last": "Warstadt", "suffix": "" }, { "first": "Alicia", "middle": [], "last": "Parrish", "suffix": "" }, { "first": "Haokun", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Anhad", "middle": [], "last": "Mohananey", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Sheng-Fu", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Samuel", "middle": [ "R" ], "last": "Bowman", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Society for Computation in Linguistics 2020", "volume": "", "issue": "", "pages": "409--410", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Warstadt, Alicia Parrish, Haokun Liu, Anhad Mo- hananey, Wei Peng, Sheng-Fu Wang, and Samuel R. Bowman. 2020. BLiMP: A benchmark of linguis- tic minimal pairs for English. In Proceedings of the Society for Computation in Linguistics 2020, pages 409-410, New York, New York. Association for Com- putational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Abcnn: Attention-based convolutional neural network for modeling sentence pairs", "authors": [ { "first": "Wenpeng", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Hinrich", "middle": [], "last": "Sch\u00fctze", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Xiang", "suffix": "" }, { "first": "Bowen", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2016, "venue": "Transactions of the Association for Computational Linguistics", "volume": "4", "issue": "", "pages": "259--272", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wenpeng Yin, Hinrich Sch\u00fctze, Bing Xiang, and Bowen Zhou. 2016. Abcnn: Attention-based convolu- tional neural network for modeling sentence pairs. Transactions of the Association for Computational Linguistics, 4:259-272.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "num": null, "type_str": "figure", "text": "Evaluate BERT with human data. DistillBERT is used for critical word prediction. FT: fine-tuned." }, "FIGREF1": { "uris": null, "num": null, "type_str": "figure", "text": "Evaluate GPT-3 with human data. GPT-3 is used for sequential word prediction." }, "TABREF0": { "html": null, "text": "(pre-trained LMs) Model cards", "num": null, "content": "", "type_str": "table" }, "TABREF1": { "html": null, "text": "Some] office buildings have desks that are covered with dust. SI unnoticed Plausible (b) [Some] office buildings have plants that are covered with dust. SI unnoticed Implausible (c) [Some] office buildings have desks and can become dusty. SI noticed Plausible (d) [Some] office buildings have plants and can become dusty. SI noticed", "num": null, "content": "
Plausibility | Example | Label
So-so | (a) [Some] office buildings have desks that are covered with dust. | SI unnoticed
Plausible | (b) [Some] office buildings have plants that are covered with dust. | SI unnoticed
Implausible | (c) [Some] office buildings have desks and can become dusty. | SI noticed
Plausible | (d) [Some] office buildings have plants and can become dusty. | SI noticed
", "type_str": "table" }, "TABREF2": { "html": null, "text": "Datasets and examples used in SI evaluation (Nieuwland et al., 2010)", "num": null, "content": "", "type_str": "table" } } } }