{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:54:25.310008Z"
},
"title": "BLiMP: The Benchmark of Linguistic Minimal Pairs for English",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Warstadt",
"suffix": "",
"affiliation": {},
"email": "warstadt@nyu.edu"
},
{
"first": "Alicia",
"middle": [],
"last": "Parrish",
"suffix": "",
"affiliation": {},
"email": "alicia.v.parrish@nyu.edu"
},
{
"first": "Haokun",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {},
"email": "haokunliu@nyu.edu"
},
{
"first": "Anhad",
"middle": [],
"last": "Mohananey",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Wei",
"middle": [],
"last": "Peng",
"suffix": "",
"affiliation": {},
"email": "weipeng@nyu.edu"
},
{
"first": "Sheng",
"middle": [
"-"
],
"last": "Fuwang",
"suffix": "",
"affiliation": {},
"email": "shengfu.wang@nyu.edu"
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": "",
"affiliation": {},
"email": "bowman@nyu.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We introduce The Benchmark of Linguistic Minimal Pairs (BLiMP), 1 a challenge set for evaluating the linguistic knowledge of language models (LMs) on major grammatical phenomena in English. BLiMP consists of 67 individual datasets, each containing 1,000 minimal pairs-that is, pairs of minimally different sentences that contrast in grammatical acceptability and isolate specific phenomenon in syntax, morphology, or semantics. We generate the data according to linguist-crafted grammar templates, and human aggregate agreement with the labels is 96.4%. We evaluate n-gram, LSTM, and Transformer (GPT-2 and Transformer-XL) LMs by observing whether they assign a higher probability to the acceptable sentence in each minimal pair. We find that state-of-the-art models identify morphological contrasts related to agreement reliably, but they struggle with some subtle semantic and syntactic phenomena, such as negative polarity items and extraction islands.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "We introduce The Benchmark of Linguistic Minimal Pairs (BLiMP), 1 a challenge set for evaluating the linguistic knowledge of language models (LMs) on major grammatical phenomena in English. BLiMP consists of 67 individual datasets, each containing 1,000 minimal pairs-that is, pairs of minimally different sentences that contrast in grammatical acceptability and isolate specific phenomenon in syntax, morphology, or semantics. We generate the data according to linguist-crafted grammar templates, and human aggregate agreement with the labels is 96.4%. We evaluate n-gram, LSTM, and Transformer (GPT-2 and Transformer-XL) LMs by observing whether they assign a higher probability to the acceptable sentence in each minimal pair. We find that state-of-the-art models identify morphological contrasts related to agreement reliably, but they struggle with some subtle semantic and syntactic phenomena, such as negative polarity items and extraction islands.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Current neural networks for sentence processing rely on unsupervised pretraining tasks like language modeling. Still, it is an open question how the linguistic knowledge of state-of-the-art language models (LMs) varies across the linguistic phenomena of English. Recent studies (e.g., Linzen et al., 2016; Marvin and Linzen, 2018; have explored this question by evaluating LMs' preferences between minimal pairs of sentences differing in grammatical acceptability, as in Example 1. However, each of these studies uses a different set of metrics, and focuses on a small set of linguistic paradigms, severely limiting any possible bigpicture conclusions.",
"cite_spans": [
{
"start": 285,
"end": 305,
"text": "Linzen et al., 2016;",
"ref_id": "BIBREF41"
},
{
"start": 306,
"end": 330,
"text": "Marvin and Linzen, 2018;",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(1) a. The cats annoy Tim. (grammatical) b. *The cats annoys Tim. (ungrammatical) We introduce the Benchmark of Linguistic Minimal Pairs (shortened to BLiMP), a linguistically motivated benchmark for assessing the sensitivity of LMs to acceptability contrasts across a wide range of English phenomena, covering both previously studied and novel contrasts. BLiMP consists of 67 datasets automatically generated from linguist-crafted grammar templates, each containing 1,000 minimal pairs and organized by phenomenon into 12 categories. Validation with crowdworkers shows that BLiMP faithfully represents human preferences.",
"cite_spans": [
{
"start": 27,
"end": 40,
"text": "(grammatical)",
"ref_id": null
},
{
"start": 66,
"end": 81,
"text": "(ungrammatical)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We use BLiMP to study several pretrained LMs: Transformer-based LMs GPT-2 (Radford et al., 2019) and Transformer-XL (Dai et al., 2019) , an LSTM LM trained by Gulordava et al. (2019) , and an n-gram LM. We evaluate whether the LM assigns a higher probability to the acceptable sentence in each minimal pair to determine which grammatical distinctions LMs are sensitive to. This gives us indirect evidence about each model's linguistic knowledge and allows us to compare models in a fine-grained way. We conclude that current neural LMs appear to acquire robust knowledge of morphological agreement and some syntactic phenomena such as ellipsis and control/ raising. They show weaker evidence of knowledge about argument structure, negative polarity item licensing, and the semantic properties of quantifiers. All models perform at or near chance on extraction islands. Overall, every model we evaluate falls short of human performance by a wide margin. GPT-2, which performs the best, performs 8 points below humans overall, though it does match or exceed human performance on specific phenomena.",
"cite_spans": [
{
"start": 74,
"end": 96,
"text": "(Radford et al., 2019)",
"ref_id": "BIBREF47"
},
{
"start": 116,
"end": 134,
"text": "(Dai et al., 2019)",
"ref_id": "BIBREF26"
},
{
"start": 159,
"end": 182,
"text": "Gulordava et al. (2019)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In \u00a76.3 we conduct additional experiments to investigate the effect of training size on the LSTM LM and Transformer-XL's performance on BLiMP. Although we see steady improvements in overall performance, we find that LMs learn phenomenon-specific distinctions at different rates. In \u00a76.4 we consider alternative wellmotivated evaluation metrics on BLiMP, but find that they do not differ drastically from our method of comparing LM probabilities for full sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We conclude that whereas models like GPT-2 appear to have significant linguistic knowledge, this knowledge is concentrated in some specific domains of English grammar. We use BLiMP to uncover several linguistic phenomena where even state-of-the-art language models clearly lack human-like knowledge, and to bring into focus those areas of grammar that future studies evaluating LMs should investigate in greater depth.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2 Background and Related Work",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The objective of a language model is to give a probability distribution over the strings of a language. Both neural network and non-neural network architectures are used to build LMs, and neural models can be trained in a self-supervised setting without the need for labeled data. Recently, variants of neural language modeling have been shown to be a strong pretraining task for natural language processing tasks (Howard and Ruder, 2018; Peters et al., 2018; Radford et al., 2018; Devlin et al., 2019) .",
"cite_spans": [
{
"start": 414,
"end": 438,
"text": "(Howard and Ruder, 2018;",
"ref_id": "BIBREF36"
},
{
"start": 439,
"end": 459,
"text": "Peters et al., 2018;",
"ref_id": "BIBREF45"
},
{
"start": 460,
"end": 481,
"text": "Radford et al., 2018;",
"ref_id": "BIBREF46"
},
{
"start": 482,
"end": 502,
"text": "Devlin et al., 2019)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Language Models",
"sec_num": "2.1"
},
{
"text": "The last decade has seen two major paradigm shifts in the state of the art for language modeling. First, there was a movement from models based on local n-gram statistics (see Chen and Goodman, 1999) to neural sequence models such as LSTMs (Mikolov et al., 2010) , which optimize on the task of predicting the next token. Subsequently, Transformer-based architectures employing selfattention (Vaswani et al., 2017) have outperformed LSTMs (e.g., Dai et al., 2019) . Although these shifts have resulted in stronger LMs, perplexity on large benchmark datasets like WikiText-103 (Merity et al., 2016) has remained the primary performance metric, which cannot give detailed insight into these models' knowledge of grammar. Evaluation on benchmarks like GLUE (Wang et al., 2018 , which heavily adapt language models to perform downstream tasks, is more informative, but doesn't offer broad coverage of linguistic phenomena, and doesn't necessary reflect knowledge that is already present in the LMs.",
"cite_spans": [
{
"start": 176,
"end": 199,
"text": "Chen and Goodman, 1999)",
"ref_id": "BIBREF18"
},
{
"start": 240,
"end": 262,
"text": "(Mikolov et al., 2010)",
"ref_id": "BIBREF44"
},
{
"start": 392,
"end": 414,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF55"
},
{
"start": 446,
"end": 463,
"text": "Dai et al., 2019)",
"ref_id": "BIBREF26"
},
{
"start": 576,
"end": 597,
"text": "(Merity et al., 2016)",
"ref_id": "BIBREF43"
},
{
"start": 754,
"end": 772,
"text": "(Wang et al., 2018",
"ref_id": "BIBREF57"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Language Models",
"sec_num": "2.1"
},
{
"text": "Many recent studies have searched for evidence that neural networks (NNs) learn representations that implicitly encode grammatical concepts. We refer to the ability to encode these concepts as linguistic knowledge. Some studies evaluate NNs' linguistic knowledge using probing tasks in which a classifier is trained to directly predict grammatical properties of a sentence (e.g., syntactic tree depth) or part of a sentence (e.g., part-of-speech) using only the NNs' learned representation as input (Shi et al., 2016; Adi et al., 2017; Conneau et al., 2018; Ettinger et al., 2018; Tenney et al., 2019) . We follow a complementary approach that uses acceptability judgments to address the same question without the need for training data labeled with grammatical concepts. Acceptability judgments are the main form of behavioral data used in generative linguistics to measure human linguistic competence (Chomsky, 1965; Sch\u00fctze, 1996) .",
"cite_spans": [
{
"start": 499,
"end": 517,
"text": "(Shi et al., 2016;",
"ref_id": "BIBREF52"
},
{
"start": 518,
"end": 535,
"text": "Adi et al., 2017;",
"ref_id": "BIBREF14"
},
{
"start": 536,
"end": 557,
"text": "Conneau et al., 2018;",
"ref_id": "BIBREF24"
},
{
"start": 558,
"end": 580,
"text": "Ettinger et al., 2018;",
"ref_id": "BIBREF28"
},
{
"start": 581,
"end": 601,
"text": "Tenney et al., 2019)",
"ref_id": "BIBREF54"
},
{
"start": 903,
"end": 918,
"text": "(Chomsky, 1965;",
"ref_id": "BIBREF20"
},
{
"start": 919,
"end": 933,
"text": "Sch\u00fctze, 1996)",
"ref_id": "BIBREF51"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic Knowledge of NNs",
"sec_num": "2.2"
},
{
"text": "One branch of this literature uses minimal pairs to infer whether LMs detect specific grammatical contrasts. Table 1 summarizes linguistic phenomena studied in this work. For instance, Linzen et al. (2016) look closely at minimal pairs contrasting subject-verb agreement. Marvin and Linzen (2018) expand the investigation to negative polarity item and reflexive licensing. However, these and related studies cover a limited set of phenomena, to the exclusion of well-studied phenomena in linguistics such as control and raising, ellipsis, quantification, and countless others. This is likely due to the labor-intensive nature of collecting such targeted minimal pairs.",
"cite_spans": [
{
"start": 185,
"end": 205,
"text": "Linzen et al. (2016)",
"ref_id": "BIBREF41"
},
{
"start": 272,
"end": 296,
"text": "Marvin and Linzen (2018)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [
{
"start": 109,
"end": 116,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Linguistic Knowledge of NNs",
"sec_num": "2.2"
},
{
"text": "A related line of work evaluates neural networks on acceptability judgments in a more domaingeneral way. Corpora of sentences and their grammaticality are collected for this purpose in a",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic Knowledge of NNs",
"sec_num": "2.2"
},
{
"text": "Anaphora/binding Marvin and Linzen (2018) , , Warstadt et al. (2019b ) Subj.-verb agreement Linzen et al. (2016 , , Gulordava et al. (2019) , Marvin and Linzen (2018) , , Warstadt et al. (2019b) Neg. polarity items Marvin and Linzen (2018) , , Jumelet and Hupkes (2018) , , Warstadt et al. (2019a) Filler-gap/Islands , Warstadt et al. (2019b) , Zamparelli (2018, 2019) , Chaves (2020), Da Costa and Chaves (2020) Argument structure Kann et al. (2019) , Warstadt et al. (2019b) , Chowdhury and Zamparelli (2019) number of studies (Heilman et al., 2014; Lau et al., 2017; Warstadt et al., 2019b) . The most recent and comprehensive corpus is CoLA (Warstadt et al., 2019b) , containing 10k sentences covering a wide variety of linguistic phenomena provided as examples in linguistics papers and books. CoLA, which is included in the GLUE benchmark (Wang et al., 2018) , has been used to track advances in the sensitivity of reusable sentence encoding models to acceptability. Current models like BERT (Devlin et al., 2019) and T5 (Raffel et al., 2019 ) now learn to give acceptability judgments that approach or even exceed individual human agreement with CoLA.",
"cite_spans": [
{
"start": 17,
"end": 41,
"text": "Marvin and Linzen (2018)",
"ref_id": "BIBREF42"
},
{
"start": 46,
"end": 68,
"text": "Warstadt et al. (2019b",
"ref_id": "BIBREF62"
},
{
"start": 69,
"end": 111,
"text": ") Subj.-verb agreement Linzen et al. (2016",
"ref_id": null
},
{
"start": 116,
"end": 139,
"text": "Gulordava et al. (2019)",
"ref_id": "BIBREF32"
},
{
"start": 142,
"end": 166,
"text": "Marvin and Linzen (2018)",
"ref_id": "BIBREF42"
},
{
"start": 171,
"end": 194,
"text": "Warstadt et al. (2019b)",
"ref_id": "BIBREF62"
},
{
"start": 215,
"end": 239,
"text": "Marvin and Linzen (2018)",
"ref_id": "BIBREF42"
},
{
"start": 244,
"end": 269,
"text": "Jumelet and Hupkes (2018)",
"ref_id": "BIBREF37"
},
{
"start": 319,
"end": 342,
"text": "Warstadt et al. (2019b)",
"ref_id": "BIBREF62"
},
{
"start": 345,
"end": 368,
"text": "Zamparelli (2018, 2019)",
"ref_id": null
},
{
"start": 432,
"end": 450,
"text": "Kann et al. (2019)",
"ref_id": "BIBREF38"
},
{
"start": 453,
"end": 476,
"text": "Warstadt et al. (2019b)",
"ref_id": "BIBREF62"
},
{
"start": 529,
"end": 551,
"text": "(Heilman et al., 2014;",
"ref_id": "BIBREF34"
},
{
"start": 552,
"end": 569,
"text": "Lau et al., 2017;",
"ref_id": "BIBREF40"
},
{
"start": 570,
"end": 593,
"text": "Warstadt et al., 2019b)",
"ref_id": "BIBREF62"
},
{
"start": 645,
"end": 669,
"text": "(Warstadt et al., 2019b)",
"ref_id": "BIBREF62"
},
{
"start": 845,
"end": 864,
"text": "(Wang et al., 2018)",
"ref_id": "BIBREF57"
},
{
"start": 998,
"end": 1019,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF27"
},
{
"start": 1027,
"end": 1047,
"text": "(Raffel et al., 2019",
"ref_id": "BIBREF48"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Relevant work",
"sec_num": null
},
{
"text": "Although CoLA can provide evidence about phenomenon-specific knowledge of models, this method is limited by the need to train a supervised classifier on CoLA data prior to evaluation. This is because CoLA is designed for binary acceptability classification, and there is no generally accepted method for obtaining binary acceptability predictions from unsupervised models like LMs. 2 measure phenomenon-specific performance on CoLA for several pretrained sentence encoding models: an LSTM, GPT (Radford et al., 2018) , and BERT. However, the use of supervision prevents making strong conclusions about the sentence encoding component, since it is not possible to distinguish what the encoder knows from what is learned through supervised training on acceptability data.",
"cite_spans": [
{
"start": 382,
"end": 383,
"text": "2",
"ref_id": null
},
{
"start": 494,
"end": 516,
"text": "(Radford et al., 2018)",
"ref_id": "BIBREF46"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Relevant work",
"sec_num": null
},
{
"text": "Evaluating LMs on minimal pairs avoids this problem, with the caveat that the LM probability of a sentence can only serve as a proxy for acceptability if confounding factors impacting a sentence's probability such as length and lexical content are controlled for. It is with these considerations in mind that we design BLiMP.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relevant work",
"sec_num": null
},
{
"text": "BLiMP consists of 67 minimal pair paradigms, each with 1,000 sentence pairs in mainstream American English grouped into 12 categories. 3 We refer to minimal pair types as paradigms and categories as phenomena. Each paradigm is annotated for the unique contrast it isolates and the broader phenomena it is part of. We automatically generate the data from linguist-crafted grammar templates, and our automatic labels are validated with crowd-sourced human judgments.",
"cite_spans": [
{
"start": 135,
"end": 136,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "Although each minimal pair type corresponds to exactly one paradigm, a particular fact about English grammar may be illustrated by multiple paradigms. For instance, the fact that certain determiners and nouns agree can be illustrated by keeping the determiner the same and changing the number marking of the noun as in the example in These casseroles disgusts Kayla. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "To create minimal pairs exemplifying a wide array of linguistic contrasts, we found it necessary to artificially generate all datasets. This ensures both that we have sufficient unacceptable examples, and that the data is fully controlled, allowing for repeated isolation of a single linguistic phenomenon (Ettinger et al., 2018) . For each paradigm, we use a generation script to sample lexical items from a vocabulary of over 3,000 items according to a template specifying linear order of the phrases in the acceptable and unacceptable sentences in each minimal pair. Our data generation scripts are publicly available. 4 We annotate these lexical items with the morphological, syntactic, and semantic features needed to enforce selectional restrictions and create grammatical and semantically felicitous sentences. All examples in a paradigm are structurally analogous up to the point required for the relevant contrast but may vary in some ways. For instance, the template for NPI LICENSING, illustrated in Table 2 , specifies that an arbitrary verb phrase needs to be generated. Accordingly, the generation script samples from the entire set of verbs and generates the required arguments on-the-fly. Thus, the structure of the sentence then depends on whether the sampled verb is transitive, clauseembedding, raising, and so forth, but that same verb phrase and its arguments are used in both pairs in the paradigm.",
"cite_spans": [
{
"start": 306,
"end": 329,
"text": "(Ettinger et al., 2018)",
"ref_id": "BIBREF28"
},
{
"start": 622,
"end": 623,
"text": "4",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1011,
"end": 1018,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Data Generation Procedure",
"sec_num": "3.1"
},
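As a toy illustration of the template-and-features procedure described above (not the authors' released scripts; the vocabulary, feature annotations, and template here are hypothetical), the following sketch generates determiner-noun agreement pairs in which the two sentences differ in exactly one vocabulary item.

```python
# Toy sketch of template-based minimal-pair generation in the spirit of the
# procedure described above (not the released BLiMP scripts). Lexical items
# carry feature annotations; a template samples items subject to those
# features; the two sentences of a pair differ in exactly one vocabulary item.
import random

# Hypothetical miniature vocabulary with number features.
NAMES = ["Ron", "Carla"]
VERBS_PAST = ["saw", "described"]
DETS = {"sg": ["this", "that"], "pl": ["these", "those"]}
NOUNS = {"man": "men", "casserole": "casseroles", "kid": "kids"}  # singular -> plural

def det_noun_agreement_pair(rng: random.Random):
    """One determiner-noun agreement pair: same determiner, noun number flipped."""
    number = rng.choice(["sg", "pl"])
    det = rng.choice(DETS[number])
    lemma = rng.choice(list(NOUNS))
    good_noun = lemma if number == "sg" else NOUNS[lemma]
    bad_noun = NOUNS[lemma] if number == "sg" else lemma
    subj, verb = rng.choice(NAMES), rng.choice(VERBS_PAST)
    good = f"{subj} {verb} {det} {good_noun}."
    bad = f"{subj} {verb} {det} {bad_noun}."
    return good, bad

rng = random.Random(0)
for good, bad in (det_noun_agreement_pair(rng) for _ in range(3)):
    print(f"acceptable:   {good}\nunacceptable: {bad}\n")
```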
{
"text": "This generation procedure is not without limitations, and despite the very detailed vocabulary we use, implausible sentences are occasionally generated (e.g., Sam ran around some glaciers). In these cases, though, both the acceptable and unacceptable sentences will be equally implausible given world knowledge, so any difference in the probability assigned to them is still attributable to the intended grammatical contrast.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Generation Procedure",
"sec_num": "3.1"
},
{
"text": "The paradigms covered by BLiMP represent well-established contrasts in English morphology, syntax, and semantics. Each paradigm is grouped into one of 12 phenomena, shown in Table 2 . Examples of all 67 paradigms appear in Table 4 of the Appendix. The paradigms are selected with the constraints that they can be characterized using templates as described above and illustrated with minimal pairs of sentences equal in length 5 that differ in at most one vocabulary item.",
"cite_spans": [],
"ref_spans": [
{
"start": 174,
"end": 181,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 223,
"end": 230,
"text": "Table 4",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Coverage",
"sec_num": "3.2"
},
{
"text": "Although this dataset has broad coverage, it is not exhaustive. It is not possible to include every grammatical phenomenon of English, and there is no agreed-upon set of core phenomena. However, we consider frequent inclusion of a phenomenon in a syntax/semantics textbook as an informal proxy for what linguists consider to be core phenomena. We survey several syntax textbooks (e.g., Sag et al., 2003; Adger, 2003; Sportiche et al., 2013) , and find that nearly all of the phenomena in BLiMP are discussed in some source. Most of the topics that repeatedly appear in textbooks and can be represented with minimal pairs (e.g., agreement, control/raising, wh-extraction/islands, binding) are present in BLiMP. 6 We characterize the 12 phenomena in BLiMP as follows 7 :",
"cite_spans": [
{
"start": 386,
"end": 403,
"text": "Sag et al., 2003;",
"ref_id": "BIBREF50"
},
{
"start": 404,
"end": 416,
"text": "Adger, 2003;",
"ref_id": "BIBREF13"
},
{
"start": 417,
"end": 440,
"text": "Sportiche et al., 2013)",
"ref_id": "BIBREF53"
},
{
"start": 710,
"end": 711,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Coverage",
"sec_num": "3.2"
},
{
"text": "\u2022 ANAPHOR AGREEMENT: the requirement that reflexive pronouns like himself (a.k.a. anaphora) agree with their antecedents in person, number, gender, and animacy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coverage",
"sec_num": "3.2"
},
{
"text": "\u2022 ARGUMENT STRUCTURE: the ability of different verbs to appear with different types of arguments. For instance, different verbs can appear with a direct object, participate in the causative alternation, or take an inanimate argument.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coverage",
"sec_num": "3.2"
},
{
"text": "\u2022 BINDING: the structural relationship between a pronoun and its antecedent. All paradigms illustrate aspects of Chomsky's (1981) Principle A. Because coindexation cannot be annotated in BLiMP, Principles B and C are not illustrated.",
"cite_spans": [
{
"start": 113,
"end": 129,
"text": "Chomsky's (1981)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Coverage",
"sec_num": "3.2"
},
{
"text": "\u2022 CONTROL/RAISING: syntactic and semantic differences between various types of predicates that embed an infinitival VP. This includes control, raising, and toughmovement predicates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coverage",
"sec_num": "3.2"
},
{
"text": "\u2022 DETERMINER-NOUN AGREEMENT: number agreement between demonstrative determiners (e.g., this/these) and the associated noun.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coverage",
"sec_num": "3.2"
},
{
"text": "\u2022 ELLIPSIS: the possibility of omitting expressions from a sentence. Because this is difficult to illustrate with sentences of equal length, our paradigms cover only special cases of noun phrase ellipsis that meet this constraint.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coverage",
"sec_num": "3.2"
},
{
"text": "\u2022 FILLER-GAP: dependencies arising from phrasal movement in, for example, whquestions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coverage",
"sec_num": "3.2"
},
{
"text": "\u2022 IRREGULAR FORMS: irregular morphology on English past participles (e.g., broken). We are unable to evaluate models on nonexistent forms like *breaked because such forms are out of the vocabulary for some LMs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coverage",
"sec_num": "3.2"
},
{
"text": "\u2022 ISLAND EFFECTS: restrictions on syntactic environments where the gap in a filler-gap dependency may occur.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coverage",
"sec_num": "3.2"
},
{
"text": "\u2022 NPI LICENSING: restrictions on the distribution of negative polarity items like any and ever limited to, for example, the scope of negation and only.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coverage",
"sec_num": "3.2"
},
{
"text": "\u2022 QUANTIFIERS: restrictions on the distribution of quantifiers. We cover two such restrictions: superlative quantifiers (e.g., at least) cannot embed under negation, and definite quantifiers and determiners cannot be subjects in existential-there constructions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coverage",
"sec_num": "3.2"
},
{
"text": "\u2022 SUBJECT-VERB AGREEMENT: subjects and present tense verbs must agree in number.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coverage",
"sec_num": "3.2"
},
{
"text": "With a vocabulary of over 3,000 words, BLiMP has by far the most lexical variation of any related generated dataset. Table 3 : Percentage accuracy of four baseline models and raw human performance on BLiMP using a forced-choice task. A random guessing baseline would achieve an accuracy of 50%.",
"cite_spans": [],
"ref_spans": [
{
"start": 117,
"end": 124,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparison to Related Resources",
"sec_num": "3.3"
},
{
"text": "v e r a l l A N A . A G R A R G . S T R B I N D I N G C T R L . R A I S . D -N A G R E L L I P S I S F I L L E R . G A P I R R E G U L A R I S L A N D N P I Q U A N T I F I E R S S -V A G R 5-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison to Related Resources",
"sec_num": "3.3"
},
{
"text": "grammatical violations, but it is not possible to control the nature or quantity of violations in the resulting sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison to Related Resources",
"sec_num": "3.3"
},
{
"text": "To verify that the generated sentences represent a real contrast in acceptability, we conduct human validation via Amazon Mechanical Turk. 8 Twenty separate validators rated five pairs from each of the 67 paradigms, for a total of 6,700 judgments. We restricted validators to individuals currently located in the US who self-reported as native speakers of English. To assure that our validators made a genuine effort on the task, each HIT included an attention check item and a hidden field question to catch bot-assisted humans.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Validation",
"sec_num": "3.4"
},
{
"text": "Validators were paid $0.25 for completing five judgments, which we estimate took 1-2 minutes. For each minimal pair, 20 individuals completed a forced-choice task mirroring the LMs' task; the human-determined acceptable sentence was calculated via majority vote of annotators. By this metric, we estimate aggregate human agreement with our annotations to be 96.4% overall. As a threshold of inclusion in BLiMP, the majority of validators needed to agree with BLiMP on at least 4/5 examples from each paradigm. Thus, all 67 paradigms in the public version of BLiMP passed this validation; only two additional paradigms were rejected on this criterion. We also estimate individual human agreement to be 88.6% overall using the approximately 100 annotations from each paradigm. 9 Table 3 reports individual human results (and model results) as a conservative measure of human agreement.",
"cite_spans": [],
"ref_spans": [
{
"start": 777,
"end": 784,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data Validation",
"sec_num": "3.4"
},
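To make the two agreement figures above concrete, the following sketch computes aggregate agreement (majority vote per pair) and individual agreement (pooled judgments) from toy validator data; the vote counts are hypothetical placeholders.

```python
# Toy illustration of the aggregate vs. individual human-agreement measures
# described above. `votes` maps each minimal pair to its validators' binary
# judgments (1 = the validator chose the sentence BLiMP labels acceptable).
# The vote counts here are hypothetical.
votes = {
    "pair_1": [1] * 18 + [0] * 2,
    "pair_2": [1] * 12 + [0] * 8,
    "pair_3": [1] * 20,
}

# Aggregate agreement: fraction of pairs where the majority vote matches BLiMP's label.
aggregate = sum(sum(v) > len(v) / 2 for v in votes.values()) / len(votes)

# Individual agreement: fraction of all single judgments that match BLiMP's label.
individual = sum(sum(v) for v in votes.values()) / sum(len(v) for v in votes.values())

print(f"aggregate agreement:  {aggregate:.1%}")
print(f"individual agreement: {individual:.1%}")
```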
{
"text": "GPT-2 GPT-2 (Radford et al., 2019 ) is a largescale language model using the Transformer architecture (Vaswani et al., 2017 ",
"cite_spans": [
{
"start": 12,
"end": 33,
"text": "(Radford et al., 2019",
"ref_id": "BIBREF47"
},
{
"start": 102,
"end": 123,
"text": "(Vaswani et al., 2017",
"ref_id": "BIBREF55"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "4"
},
{
"text": "We build a 5-gram LM on the English Gigaword corpus (Graff et al., 2003) , which consists of 3.1B tokens. To efficiently query n-grams we use an implementation 13 based on Heafield et al. (2013). 14",
"cite_spans": [
{
"start": 52,
"end": 72,
"text": "(Graff et al., 2003)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "5-gram",
"sec_num": null
},
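A minimal sketch of how such a model can score a minimal pair, assuming the kenlm Python bindings for the implementation cited above; "gigaword.5gram.binary" is a hypothetical path to a trained model file.

```python
# Minimal sketch of scoring a BLiMP pair with a KenLM n-gram model (assumes the
# `kenlm` Python bindings; the model file path is a hypothetical placeholder).
import kenlm

lm = kenlm.Model("gigaword.5gram.binary")

good = "The cats annoy Tim."
bad = "The cats annoys Tim."

# score() returns the log10 probability of the sentence (BOS/EOS added by default).
print("correct" if lm.score(good) > lm.score(bad) else "incorrect")
```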
{
"text": "An LM's overall accuracy on BLiMP is simply the proportion of the 67,000 minimal pairs in which the model assigns a higher probability to the acceptable sentence. We report the results for all models and human evaluation in Table 3 . GPT-2 achieves the highest accuracy and the 5-gram model the lowest. All models perform well below estimated human accuracy (as described in \u00a7 3.4).",
"cite_spans": [],
"ref_spans": [
{
"start": 224,
"end": 231,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5"
},
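For concreteness, the sketch below implements the full-sentence forced-choice criterion just described with a Transformer LM, assuming the HuggingFace transformers library; "gpt2" stands in for whichever pretrained checkpoint is evaluated, and the example pair is from the introduction.

```python
# Sketch of the full-sentence criterion: an LM "gets a pair right" if it assigns
# a higher probability to the acceptable sentence. Assumes HuggingFace
# `transformers`; "gpt2" is a stand-in for the evaluated checkpoint.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def sentence_log_prob(sentence: str) -> float:
    """Sum of token log-probabilities of the sentence under the LM."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=input_ids, the model returns the mean cross-entropy over
        # the predicted tokens; multiply by their count to recover the sum.
        loss = model(ids, labels=ids).loss
    return -loss.item() * (ids.shape[1] - 1)

def pair_correct(good: str, bad: str) -> bool:
    return sentence_log_prob(good) > sentence_log_prob(bad)

pairs = [("The cats annoy Tim.", "The cats annoys Tim.")]
accuracy = sum(pair_correct(g, b) for g, b in pairs) / len(pairs)
print(f"accuracy = {accuracy:.3f}")
```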
{
"text": "The 5-gram model's poor performance-overall and on every individual category-indicates that BLiMP is likely not solvable from local co-occurrence statistics alone. Because we evaluate pretrained models that differ in architecture and training data, we can only speculate about what drives these differences (though see \u00a7 6.3 for a controlled ablation study on the LSTM LM). The results seem to indicate that access to training data is the main driver of performance on BLiMP for the neural models we evaluate. This may explain why Transformer-XL and the LSTM LM perform similarly in spite of differences in architecture, as both are trained on approximately 100M tokens of Wikipedia text. Relatedly, GPT-2's advantage may come from the fact that it is trained on roughly two orders of magnitude more data. Possibly, LSTMs trained on larger datasets could perform comparably to GPT-2, but such experiments are impractical because of the inefficiency of training LSTMs at this scale.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5"
},
{
"text": "The results also give insight into how LM's linguistic knowledge varies by domain. Models generally perform best and closest to human level on morphological phenomena. For instance, GPT-2 performs within 2.1 points of humans on ANAPHOR AGR., DET.-NOUN AGR., and SUBJ.-VERB AGR.. The set of challenging phenomena is more diverse. ISLANDS are the hardest phenomenon by a wide margin. Only GPT-2 performs well above chance, and it remains 20 points below humans. Some semantic phenomena, specifically those involving NPI LICENSING and QUANTIFIERS, are also challenging overall. All models perform relatively poorly on ARG. STRUCTURE.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion by Phenomenon",
"sec_num": "5.1"
},
{
"text": "From these results we conclude that current SotA LMs robustly encode basic facts of English agreement. This does not mean that LMs will come close to human performance for all agreement phenomena. \u00a76.1 discusses evidence that increased dependency length and the presence of agreement attractors of the kind investigated by Linzen et al. (2016) and Gulordava et al. (2019) reduce performance on agreement phenomena.",
"cite_spans": [
{
"start": 323,
"end": 343,
"text": "Linzen et al. (2016)",
"ref_id": "BIBREF41"
},
{
"start": 348,
"end": 371,
"text": "Gulordava et al. (2019)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion by Phenomenon",
"sec_num": "5.1"
},
{
"text": "We find, in accordance with , that LMs do represent long-distance wh-dependencies, but we also conclude that their representations differ fundamentally from humans'. Although some models approach human performance in ordinary filler-gap dependencies, they are exceptionally poor at identifying island violations overall. This finding suggests that they reliably encode long-distance dependencies in general, but not the syntactic domains in which these dependencies are blocked, though GPT-2 does perform well above chance on some paradigms of ISLAND EFFECTS. However, strong conclusions about how these models represent wh-dependencies are not possible using the forced-choice task compatible with BLiMP, and a complete assessment of syntactic islands is best addressed using a factorial design that manipulates both the presence of an island and an attempt to extract from it, as in Kush et al. (2018) or .",
"cite_spans": [
{
"start": 885,
"end": 903,
"text": "Kush et al. (2018)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion by Phenomenon",
"sec_num": "5.1"
},
{
"text": "In the semantic phenomena where models struggle (NPIS and QUANTIFIERS), violations are often attributed in semantic theories to a presupposition failure or contradiction arising from semantic composition or pragmatic reasoning (e.g., Chierchia, 2013; Ward and Birner, 1995; Geurts and Nouwen, 2007) . These abstract semantic and pragmatic factors may be difficult for LMs to learn. Marvin and Linzen also find that LSTMs largely fail to recognize NPI licensing conditions. Warstadt et al. (2019a) find that BERT (which is similar in scale to GPT-2) recognizes these conditions inconsistently in an unsupervised setting.",
"cite_spans": [
{
"start": 234,
"end": 250,
"text": "Chierchia, 2013;",
"ref_id": "BIBREF19"
},
{
"start": 251,
"end": 273,
"text": "Ward and Birner, 1995;",
"ref_id": "BIBREF59"
},
{
"start": 274,
"end": 298,
"text": "Geurts and Nouwen, 2007)",
"ref_id": "BIBREF30"
},
{
"start": 473,
"end": 496,
"text": "Warstadt et al. (2019a)",
"ref_id": "BIBREF61"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion by Phenomenon",
"sec_num": "5.1"
},
{
"text": "The weak performance on ARG. STRUCTURE is somewhat surprising, since arguments and heads are usually-though not always-adjacent (e.g., subjects and direct objects are adjacent to the verb in default English word order). However, argument structure is closely related to semantic event structure (see Marantz, 2013), which may be comparatively difficult for LMs to learn. Also, judgments about argument structure are complicated by the possibility of coercing a frequently transitive verb to be intransitive and vice versa as well as the existence of secondary meanings of verbs with different argument structures (e.g., normally intransitive boast has a transitive use as in The spa boasts 10 pools), which might make this domain somewhat more difficult for LMs. Though even with these complications, humans detect the intended contrast 90% of the time. We note that the reported difficulty of these phenomena contradicts conclusion that argument structure is one of the strongest domains for neural models. However, Warstadt and Bowman evaluate classifiers with supervision on CoLA, a large proportion of which is sentences related to argument structure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion by Phenomenon",
"sec_num": "5.1"
},
{
"text": "Finally, we caution against interpreting positive results on a general phenomenon in BLiMP as proof of human-like knowledge. Although it is unlikely that GPT-2 could reach human performance on the SUBJ.-VERB AGR. paradigms without acquiring a concept of number marking that abstracts away from specific lexical items, it is difficult to rule out this possibility without accumulating different forms of evidence, for instance, by testing how it generalizes to nonce words. We take the paradigms in FILLER-GAP as a cautionary example (see Table 4 ). There are four paradigms that assess a model's sensitivity to the syntactic requirements of complementizer that versus a wh-word. We observe that all models more or less succeed when the unacceptable sentence lacks a necessary gap, but fail when it contains an illicit gap. These results suggest the models' ability to accurately detect a contrast in whether a gap is filled following a wh-word is not clearly based on a generalization about the relationship between that wh-word and its gap, as such a generalization should extend to the cases where the models currently fail to detect the correct contrast. More generally, conclusions about a model's knowledge of a particular grammatical concept can only be reached by considering several paradigms.",
"cite_spans": [],
"ref_spans": [
{
"start": 538,
"end": 545,
"text": "Table 4",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Results and Discussion by Phenomenon",
"sec_num": "5.1"
},
{
"text": "We also ask what factors besides linguistic phenomena affect model accuracy. Figure 2 shows how sentence length, perplexity (which does not depend on length), the probability of the good sentence (which does depend on length), and confidence affect model performance. The effect of perplexity is much weaker for GPT-2 than for other models, which indicates that it is probably more robust to sentences with non-stereotypical syntax or describing unlikely scenarios. GPT-2 is the only model where accuracy increases largely monotonically with confidence. A similar relationship holds between confidence and agreement in human acceptability judgments.",
"cite_spans": [],
"ref_spans": [
{
"start": 77,
"end": 85,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Shallow Predictors of Performance",
"sec_num": "5.2"
},
{
"text": "We examine the extent to which models and humans succeed at detecting contrasts for the same linguistic phenomena. Figure 1 shows the Pearson correlation between the four LMs and humans of their accuracies on the 67 paradigms. The neural models correlate moderately with humans, with GPT-2 correlating most strongly. The n-gram model's performance correlates with humans relatively weakly. Neural models correlate with each other more strongly, suggesting neural networks share some biases that are not human-like. Transformer-XL and LSTM's high correlation of 0.9 possibly reflects their similar training data.",
"cite_spans": [],
"ref_spans": [
{
"start": 115,
"end": 123,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Correlation of Model and Human Performance",
"sec_num": "5.3"
},
{
"text": "The presence of intervening material can lower the ability of humans to detect agreement dependencies (Bock and Miller, 1991) . We study how intervening material affects the LMs' sensitivity to mismatches in agreement in BLiMP. First, we test for sensitivity to determiner-noun agreement with and without an intervening adjective, as in Example (2). The results are plotted in Figure 3 .",
"cite_spans": [
{
"start": 102,
"end": 125,
"text": "(Bock and Miller, 1991)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 377,
"end": 385,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Long-Distance Dependencies",
"sec_num": "6.1"
},
{
"text": "The n-gram model is the most heavily impacted, performing on average 35 points worse. This is unsurprising, since the bigram consisting of a determiner and noun is far more likely to be observed than the trigram of determiner, adjective, and noun. For the neural models, we find a weak but consistent effect, with all models performing on average between 5 and 3 points worse when there is an intervening adjective.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Long-Distance Dependencies",
"sec_num": "6.1"
},
{
"text": "(2) a. Ron saw that man/*men. b. Ron saw that nice man/*men.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Long-Distance Dependencies",
"sec_num": "6.1"
},
{
"text": "Second, we test for sensitivity to mismatches in subject-verb agreement when an attractor noun of the opposite number intervenes. We compare attractors in relative clauses (3-b) and as part of a relational noun (3-c), following experiments by Linzen et al. (2016) and others. Again, we find that the n-gram model's performance is reduced significantly by this intervening material, suggesting the model is consistently misled by the presence of an attractor. All the neural models perform above chance with an attractor present, but GPT-2 and the LSTM perform 22 and 20 points worse when an attractor is present than when there is no attractor, while Transformer-XL's performance is reduced by only 5 points. Thus, we reproduce Linzen et al.'s finding that attractors significantly reduce LSTM LMs' sensitivity to mismatches in agreement and find evidence that this holds true of some Transformer LMs as well.",
"cite_spans": [
{
"start": 243,
"end": 263,
"text": "Linzen et al. (2016)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Long-Distance Dependencies",
"sec_num": "6.1"
},
{
"text": "(3) a. The sisters bake/*bakes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Long-Distance Dependencies",
"sec_num": "6.1"
},
{
"text": "b. The sisters who met Cheryl bake/*bakes. c. The sisters of Cheryl bake/*bakes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Long-Distance Dependencies",
"sec_num": "6.1"
},
{
"text": "In DET.-NOUN AGR. and SUBJ.-VERB AGR., we generate separate datasets for nouns with regular and irregular number marking, as in Example (4). All else being equal, only models with access to sub-word-level information should make any distinction between regular and irregular morphology.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Regular vs. Irregular Agreement",
"sec_num": "6.2"
},
{
"text": "(4) a. Ron saw that nice kid/*kids. (regular) b. Ron saw that nice man/*men. (irregular)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Regular vs. Irregular Agreement",
"sec_num": "6.2"
},
{
"text": "In fact, Figure 4 shows that the two sub-wordlevel models GPT-2 and Transformer-XL show little effect of irregular morphology: They perform less than 1.3 points worse on irregulars than regulars. Their high overall performance suggests that they robustly encode number features without relying on segmental cues. 15 Figure 4 : Models' performance on agreement phenomena between a determiner and noun and between a subject and verb, broken down by whether the noun/subject has a regular or irregular plural form",
"cite_spans": [],
"ref_spans": [
{
"start": 9,
"end": 17,
"text": "Figure 4",
"ref_id": null
},
{
"start": 316,
"end": 324,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Regular vs. Irregular Agreement",
"sec_num": "6.2"
},
{
"text": "We use BLiMP to track how a model's representation of particular phenomena varies with the quantity of training data. Using different sized subsets of Gulordava et al.'s (2019) training data, we retrain the LSTM and Transformer-XL models and evaluate their performance on BLiMP. Figure 5 shows that different phenomena have notably different learning curves across different training sizes even if the full model trained on 83M tokens achieved equivalent accuracy scores. For example, the LSTM model ultimately performs well on both IRREGULAR and ANAPHOR AGR., but requires more training to reach this level of performance for ANAPHOR AGR. These learning curve differences show how BLiMP performance dissociates from perplexity on Wikipedia data, a standard measure of LM performance: Although perplexity decreases with more training data, 16 performance on different phenomena grows at varying rates.",
"cite_spans": [
{
"start": 151,
"end": 176,
"text": "Gulordava et al.'s (2019)",
"ref_id": null
},
{
"start": 840,
"end": 842,
"text": "16",
"ref_id": null
}
],
"ref_spans": [
{
"start": 279,
"end": 287,
"text": "Figure 5",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Training size and BLiMP performance",
"sec_num": "6.3"
},
{
"text": "We conjecture that there is a sigmoid relationship between the logarithm of training set size and BLiMP performance that appears to be roughly linear at this scale. We conduct linear regression analyses to estimate the rate of increase in performance in relation to the logarithm (base 2) of dataset size. For the LSTM LM, best-fit lines for phenomena on which the model had the highest accuracy have the steepest slopes: ANAPHOR AGR. (0.0623), DET.-NOUN AGR. (0.0426), and IRREGULAR (0.039). We see the shallowest slopes on phenomena with the worst performance: NPIS (0.0078) and ISLANDS (0.0036). For Transformer-XL, we observe a similar pattern: The steepest learning curves again belong to ANAPHOR AGR. (0.0545) and DET.-NOUN AGR. (0.0405), and the shallowest to NPIS (0.0055) and ISLANDS (0.0039). Based on these values, we estimate that if log-linear improvement continues, the LSTM LM and Transformer-XL should require well over 10 20 tokens of training data to achieve human-like performance on these hardest phenomena.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training size and BLiMP performance",
"sec_num": "6.3"
},
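The extrapolation in the last sentence above is simple arithmetic over the fitted slopes; the sketch below reproduces it for one reported slope, with the current accuracy and the human-level target as hypothetical placeholders.

```python
# Back-of-the-envelope version of the extrapolation above: if accuracy grows
# linearly in log2(training tokens), how many tokens are needed to reach a
# target level? The slope is the LSTM's reported ISLANDS slope; the current
# accuracy and the target are hypothetical placeholders for illustration.
slope_per_doubling = 0.0036   # reported best-fit slope (accuracy gain per doubling of data)
tokens_now = 83e6             # full training set size used here
acc_now = 0.60                # hypothetical current accuracy on the phenomenon
acc_target = 0.90             # hypothetical human-level target

doublings_needed = (acc_target - acc_now) / slope_per_doubling
tokens_needed = tokens_now * 2 ** doublings_needed
print(f"{doublings_needed:.0f} doublings, roughly {tokens_needed:.1e} tokens")
```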
{
"text": "We also find that increasing model size (number of parameters) is unlikely to improve performance: We evaluate four pretrained versions of GPT-2 with 117 M to 1,558 M parameters trained on WebText. All models have overall BLiMP accuracy of 0.84 \u00b1 .01%, and standard deviation among the models on each of the 12 phenomena does not exceed 0.03. This finding bolsters our earlier conclusion in \u00a75 that amount of training data has the biggest impact on BLiMP performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training size and BLiMP performance",
"sec_num": "6.3"
},
{
"text": "There are several other methods one can use to measure an LM's preference between two minimally different sentences. So far, we have considered only the full-sentence method, advocated for by Marvin and Linzen (2018) , which compares LM likelihoods of full sentences. In a followup experiment, we use two prefix methods, each of which has appeared in related prior work, that evaluate a model's preferences by comparing its prediction at a key point of divergence between the sentences. Subsets of BLiMP data are designed to be compatible with multiple methods, allowing us to conduct the first direct comparison. We find that all methods give broadly similar results when aggregating over a set of paradigms. We see no strong argument against evaluating solely using the full-sentence method, though some results diverge for specific paradigms.",
"cite_spans": [
{
"start": 192,
"end": 216,
"text": "Marvin and Linzen (2018)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Alternate Evaluation Methods",
"sec_num": "6.4"
},
{
"text": "One-Prefix Method In the one-prefix method, used by Linzen et al. (2016) , a pair of sentences share the same initial portion of a sentence, but differ in a critical word that make them differ in grammaticality (e.g., The cat eats mice vs. The cat eat mice). The model's prediction is correct if it assigns a higher probability to the grammatical token given the shared prefix.",
"cite_spans": [
{
"start": 52,
"end": 72,
"text": "Linzen et al. (2016)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Alternate Evaluation Methods",
"sec_num": "6.4"
},
{
"text": "In the two-prefix method, used by , a pair of sentences differ in their initial string, and the grammaticality difference is only revealed when a shared critical word is included (e.g., The cat eats mice vs. The cats eats mice). For these paradigms, we evaluate whether the model assigns a higher probability to the critical word conditioned on the grammatical prefix than on the ungrammatical prefix.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Two-Prefix Method",
"sec_num": null
},
{
"text": "The prefix methods differ from the fullsentence method in two key ways: (i) they require that the acceptability of the sentence be unambiguously predictable from the critical word, but not sooner, and (ii) they are not affected by predictions made by the LM following the critical word. These values do affect the full sentence method. For example, assuming that P (are numerous) \u226b P (is numerous), a model could predict that The cats are numerous is more likely than The cats is numerous without correctly predicting that P (are|the cats) > P (is|the cats). Using prefix probabilities allows us to exclude models' use of this additional information and evaluate how the models perform when they have just enough information to judge grammaticality. Figure 6 shows that models have generally comparable accuracies across all three methods. However, there are some cases where we observe differences between these methods. For example, Transformer-XL performs much worse at BINDING, DET.-NOUN AGR., and SUBJ.-VERB AGR. in the simple LM method, suggesting that the probabilities Transformer-XL assigns to the irrelevant part at the end of the sentence very often overturn the observed preference based on probability up to the critical word. On the other hand, GPT-2 benefits from reading the whole sentence for BINDING phenomena, as its performance is better in the simple LM method than in the prefix method.",
"cite_spans": [],
"ref_spans": [
{
"start": 750,
"end": 758,
"text": "Figure 6",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Two-Prefix Method",
"sec_num": null
},
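The sketch below makes the one-prefix comparison from the example above concrete, assuming the HuggingFace transformers library; it computes log P(critical word | prefix) from the model's next-token distribution, summing over BPE pieces when the critical word is split into several tokens.

```python
# Sketch of the one-prefix criterion: given the shared prefix, compare the LM's
# probabilities for the two critical words. Assumes HuggingFace `transformers`;
# "gpt2" is a stand-in for the evaluated checkpoint.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def continuation_log_prob(prefix: str, continuation: str) -> float:
    """log P(continuation | prefix), summed over the continuation's tokens."""
    prefix_ids = tokenizer(prefix, return_tensors="pt").input_ids
    cont_ids = tokenizer(continuation, return_tensors="pt").input_ids
    full = torch.cat([prefix_ids, cont_ids], dim=1)
    with torch.no_grad():
        logits = model(full).logits
    log_probs = torch.log_softmax(logits, dim=-1)
    total = 0.0
    for i in range(cont_ids.shape[1]):
        pos = prefix_ids.shape[1] + i           # position of the continuation token
        total += log_probs[0, pos - 1, full[0, pos]].item()  # predicted from pos-1
    return total

# One-prefix method: same prefix, different critical words (note the leading space).
good = continuation_log_prob("The cat", " eats")
bad = continuation_log_prob("The cat", " eat")
print("correct" if good > bad else "incorrect")
```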
{
"text": "We conclude that with a sufficiently diverse set of paradigms, the various metrics under consideration will give similar results. Thus, it is not problematic that BLiMP relies only on the full-sentence method, and doing so allows BLiMP to include many paradigms not compatible with either prefix method. Nonetheless, prefix methods are still valuable for detailed analysis or for studies making direct comparison to psycholinguistic theories (e.g., .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Two-Prefix Method",
"sec_num": null
},
{
"text": "We have shown ways in which BLiMP can be used as tool to gain evidence about both the overall and fine-grained linguistic knowledge of language models. Like the GLUE benchmark (Wang et al., 2018) , BLiMP assigns a single overall score to an LM that summarizes its general sensitivity to minimal pair contrasts. It also provides a breakdown of LM performance by linguistic phenomenon, which can be used to draw more concrete conclusions about the kinds of grammatical features learned acquired by a given model. This kind of information is a linguistically motivated evaluation of LMs that can complement common metrics like perplexity.",
"cite_spans": [
{
"start": 176,
"end": 195,
"text": "(Wang et al., 2018)",
"ref_id": "BIBREF57"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "7"
},
{
"text": "Furthermore, the extent to which humans resemble data-driven learners like language models is debated in linguistics and cognitive science (see e.g., Chomsky, 1965; Reali and Christiansen, 2005) . In some domains, we may require the aid of innate knowledge to acquire phenomenon-specific knowledge resembling that tested in BLiMP. By evaluating whether selfsupervised learners like LMs acquire human-like grammatical acuity in a particular domain, we gather indirect evidence as to whether this phenomenon is a necessary component of humans' innate knowledge.",
"cite_spans": [
{
"start": 150,
"end": 164,
"text": "Chomsky, 1965;",
"ref_id": "BIBREF20"
},
{
"start": 165,
"end": 194,
"text": "Reali and Christiansen, 2005)",
"ref_id": "BIBREF49"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "7"
},
{
"text": "Another aim of BLiMP is to serve as a guide for future work on the linguistic evaluation of LMs. It is particularly interesting to better understand those empirical domains where current LMs appear to acquire some relevant knowledge, but still fall short of human performance. The results from BLiMP suggest that-in addition to relatively well-studied phenomena like fillergap dependencies, NPIs, and binding-argument structure remains one area where there is much to uncover about what LMs learn. More generally, as language modeling techniques continue to improve, it will be useful to have large-scale tools like BLiMP to efficiently track changes in what these models do and do not know about grammar. simple LM method. The bolded word is the critical word-the probability of the two different critical words for the acceptable and unacceptable sentences can be compared based on the same 'prefix'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "7"
},
{
"text": "\u2022 If a sentence has a checkmark ( ) under the 2pfx column, the sentence can be used with the 2-prefix method in addition to the simple LM method. The bolded word is the critical word-the probability of that particular word can be compared based on the two different acceptable and unacceptable 'prefixes'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "7"
},
{
"text": "https://github.com/alexwarstadt/blimp.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Though seeLau et al. (2017) for some promising proposals for normalizing LM probabilities to correlate with gradient acceptability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We choose English because it is the native language of the linguists who built the grammar templates, though in the long run, creating versions of BLiMP in additional languages would allow for coverage of more phenomena and expand BLiMP's range of usefulness. We assume 1,000 pairs is sufficient to limit random noise resulting from small sample sizes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/alexwarstadt/data generation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We define length as the number of entries from our lexicon. Some sentences in a pair contain different numbers of words because visit and drop by are each one lexical entry. Where discrepancies in number of words occur, they are generally randomly distributed across the grammatical and ungrammatical sentences in a paradigm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In line with these textbooks, we rely on stereotyped gender-name pairings and contrasts not present in all English dialects (more detail provided in the Appendix).7 Our implementation of these phenomena is often narrower than the linguistic definition because of the particular constraints described above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The full set of human judgments and a summary of the results for all 67 paradigms is inTable 4in the Appendix.9 A few had to be excluded due to ineligible annotators.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "GPT-2-XL performs slightly worse on BLiMP; see \u00a76.3. 11 https://github.com/nyu-mll/jiant/tree/ blimp-and-npi/scripts/blimp.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/sheng-fu/colorless greenRNNs.13 https://github.com/kpu/kenlm. 14 https://github.com/anhad13/blimp ngram.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The LSTM LM, which has word-level tokens, averages 5.2 points worse on the irregular paradigms. This effect is not due to morphology, but rather to the higher proportion of out-of-vocabulary items among the irregular nouns, which include many loanwords such as theses and alumni.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Average perplexity on the Gulordava et al. (2019) test set: 595 at 0.125M, 212 at 1M, 92.8 at 8M, and 53 at 64M.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This material is based upon work supported by the National Science Foundation under grant no. 1850208. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. This project has also benefited from support to SB by Eric and Wendy Schmidt (made by recommendation of the Schmidt Futures program), by Samsung Research (under the project Improving Deep Learning using Latent Structure), by Intuit, Inc., and by NVIDIA Corporation (with the donation of a Titan V GPU).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "The following contains examples from each of the 67 paradigms in BLiMP.Caveats Some paradigms include non-transparent factors that may influence interpretation. We list here those factors that we are aware of:\u2022 Several paradigms within ANAPHOR AGREE-MENT and BINDING rely on stereotyped gender assignment associated with names (e.g., Mary). A model has to have at least a weak gender-name association in order to succeed on some paradigms in BLiMP. For example, we mark sentences like Mary hugged themselves and Mary hugged himself as unacceptable, and we never include possibilities like Mary hugged themself.\u2022 To isolate certain phenomena, we had to rely on acceptability contrasts present in mainstream US and UK English but absent in many other dialects. For example, some speakers would accept the sentence Suzy don't lie, but we would mark this unacceptable based on mainstream US English judgments. BLiMP assesses models' knowledge of this specific dialect of English; in some cases it could penalize models that conform to a different dialect.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Appendix",
"sec_num": null
},
{
"text": "\u2022 Phenomenon refers to the linguistic phenomenon as noted in Table 2 . UID refers to the unique identifier used in the released dataset.\u2022 Model and human performance are reported as percent accuracy. 'Human' uses the more conservative individual judgments (as opposed to majority vote, for which each paradigm would be either 100% or 80%).\u2022 Each pair is marked for whether it is usable with a prefix method. All sentences are valid for the simple LM method.\u2022 If a sentence has a checkmark ( ) under the 1pfx column, the sentence can be used with the 1-prefix method in addition to the ",
"cite_spans": [],
"ref_spans": [
{
"start": 61,
"end": 68,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "How to read this table:",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Regina wanted it to be obvious that Maria thought about Anna. Regina forced it to be obvious that Maria thought about Anna",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Regina wanted it to be obvious that Maria thought about Anna. Regina forced it to be obvious that Maria thought about Anna.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Julia wasn't fun to talk to. Julia wasn't unlikely to talk to",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julia wasn't fun to talk to. Julia wasn't unlikely to talk to.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "This person shouldn't criticize this upset child. This person shouldn't criticize this upset children",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "This person shouldn't criticize this upset child. This person shouldn't criticize this upset children.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Brad passed one big museum and Eva passed several. Brad passed one museum and Eva passed several big",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brad passed one big museum and Eva passed several. Brad passed one museum and Eva passed several big.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Curtis's boss discussed four sons and Andrew discussed five sick sons",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Curtis's boss discussed four sons and Andrew discussed five sick sons.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Curtis's boss discussed four happy sons and Andrew discussed five sick",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Curtis's boss discussed four happy sons and Andrew discussed five sick.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Joel discovered the vase that Patricia might take",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joel discovered the vase that Patricia might take. Joel discovered what Patricia might take the vase.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Cheryl thought about some dog that upset Sandra. Cheryl thought about who some dog upset Sandra",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cheryl thought about some dog that upset Sandra. Cheryl thought about who some dog upset Sandra.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Bruce knows that person that Dawn likes that argued about a lot of guys. Bruce knows who that person that Dawn likes argued about a lot of guys",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bruce knows that person that Dawn likes that argued about a lot of guys. Bruce knows who that person that Dawn likes argued about a lot of guys.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Danielle finds out that many organizations have alarmed Chad. Danielle finds out who many organizations have alarmed Chad",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danielle finds out that many organizations have alarmed Chad. Danielle finds out who many organizations have alarmed Chad.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Christina forgot that all plays that win worry Dana. Christina forgot who all plays that win worry Dana",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christina forgot that all plays that win worry Dana. Christina forgot who all plays that win worry Dana.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Nina has learned who most men sound like. Nina has learned that most men sound like",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nina has learned who most men sound like. Nina has learned that most men sound like.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Martin did find out what every cashier that shouldn't drink wore. Martin did find out that every cashier that shouldn't drink wore. References Marantz, Alec. 2013. Verbal argument structure: Events and participants",
"authors": [],
"year": null,
"venue": "Lingua",
"volume": "30",
"issue": "",
"pages": "152--168",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin did find out what every cashier that shouldn't drink wore. Martin did find out that every cashier that shouldn't drink wore. References Marantz, Alec. 2013. Verbal argument structure: Events and participants. Lingua, 30:152-168. Elsevier.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Core Syntax: A Minimalist Approach",
"authors": [
{
"first": "David",
"middle": [],
"last": "Adger",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Adger. 2003. Core Syntax: A Minimalist Approach. Oxford University Press Oxford.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Finegrained analysis of sentence embeddings using auxiliary prediction tasks",
"authors": [
{
"first": "Yossi",
"middle": [],
"last": "Adi",
"suffix": ""
},
{
"first": "Einat",
"middle": [],
"last": "Kermany",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Ofer",
"middle": [],
"last": "Lavi",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ICLR Conference Track",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2017. Fine- grained analysis of sentence embeddings using auxiliary prediction tasks. In Proceedings of ICLR Conference Track. Toulon, France.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Representation of constituents in neural language models: Coordination phrase as a case study",
"authors": [
{
"first": "Aixiu",
"middle": [],
"last": "An",
"suffix": ""
},
{
"first": "Ethan",
"middle": [],
"last": "Peng Qian",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Wilcox",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Levy",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.04625"
]
},
"num": null,
"urls": [],
"raw_text": "Aixiu An, Peng Qian, Ethan Wilcox, and Roger Levy. 2019. Representation of constituents in neural language models: Coordination phrase as a case study. arXiv preprint arXiv:1909.04625.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Broken agreement",
"authors": [
{
"first": "Kathryn",
"middle": [],
"last": "Bock",
"suffix": ""
},
{
"first": "Carol",
"middle": [
"A"
],
"last": "Miller",
"suffix": ""
}
],
"year": 1991,
"venue": "Cognitive Psychology",
"volume": "23",
"issue": "1",
"pages": "45--93",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kathryn Bock and Carol A. Miller. 1991. Broken agreement. Cognitive Psychology, 23(1):45-93.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "What don't RNN language models learn about filler-gap dependencies?",
"authors": [
{
"first": "P",
"middle": [],
"last": "Rui",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Chaves",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Third Meeting of the Society for Computation in Linguistics (SCiL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rui P. Chaves. 2020. What don't RNN language models learn about filler-gap dependencies? In Proceedings of the Third Meeting of the Society for Computation in Linguistics (SCiL).",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "An empirical study of smoothing techniques for language modeling",
"authors": [
{
"first": "F",
"middle": [],
"last": "Stanley",
"suffix": ""
},
{
"first": "Joshua",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Goodman",
"suffix": ""
}
],
"year": 1999,
"venue": "Computer Speech & Language",
"volume": "13",
"issue": "4",
"pages": "359--394",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stanley F. Chen and Joshua Goodman. 1999. An empirical study of smoothing techniques for language modeling. Computer Speech & Language, 13(4):359-394.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Logic in Grammar",
"authors": [
{
"first": "Gennaro",
"middle": [],
"last": "Chierchia",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gennaro Chierchia. 2013. Logic in Grammar. Oxford University Press.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Aspects of the Theory of Syntax",
"authors": [
{
"first": "Noam",
"middle": [],
"last": "Chomsky",
"suffix": ""
}
],
"year": 1965,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Noam Chomsky. 1965. Aspects of the Theory of Syntax. MIT Press.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Lectures on Government and Binding",
"authors": [
{
"first": "Noam",
"middle": [],
"last": "Chomsky",
"suffix": ""
}
],
"year": 1981,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Noam Chomsky. 1981. Lectures on Government and Binding.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "RNN simulations of grammaticality judgments on long-distance dependencies",
"authors": [
{
"first": "Absar",
"middle": [],
"last": "Shammur",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Chowdhury",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zamparelli",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "133--144",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shammur Absar Chowdhury and Roberto Zamparelli. 2018. RNN simulations of grammaticality judgments on long-distance dependencies. In Proceedings of the 27th International Conference on Computational Linguistics, pages 133-144.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "An LSTM adaptation study of (un) grammaticality",
"authors": [
{
"first": "Absar",
"middle": [],
"last": "Shammur",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Chowdhury",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zamparelli",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "204--212",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shammur Absar Chowdhury and Roberto Zamparelli. 2019. An LSTM adaptation study of (un) grammaticality. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 204-212.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "What you can cram into a single &!#* vector: Probing sentence embeddings for linguistic properties",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "German",
"middle": [],
"last": "Kruszewski",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Lo\u00efc",
"middle": [],
"last": "Barrault",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2018,
"venue": "ACL 2018-56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "2126--2136",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, German Kruszewski, Guillaume Lample, Lo\u00efc Barrault, and Marco Baroni. 2018. What you can cram into a single &!#* vector: Probing sentence embeddings for linguistic properties. In ACL 2018-56th Annual Meeting of the Association for Computational Linguistics, volume 1, pages 2126-2136.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Assessing the ability of transformerbased neural models to represent structurally unbounded dependencies",
"authors": [
{
"first": "Jillian",
"middle": [
"K"
],
"last": "Da Costa",
"suffix": ""
},
{
"first": "Rui",
"middle": [
"P"
],
"last": "Chaves",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Third Meeting of the Society for Computation in Linguistics (SCiL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jillian K. Da Costa and Rui P. Chaves. 2020. Assessing the ability of transformer- based neural models to represent structurally unbounded dependencies. In Proceedings of the Third Meeting of the Society for Computation in Linguistics (SCiL).",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Transformer-XL: Attentive language models beyond a fixed-length context",
"authors": [
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2978--2988",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc Le, and Ruslan Salakhutdinov. 2019. Transformer-XL: Attentive language models beyond a fixed-length context. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2978-2988. Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Assessing composition in sentence vector representations",
"authors": [
{
"first": "Allyson",
"middle": [],
"last": "Ettinger",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [],
"last": "Elgohary",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Phillips",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1790--1801",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Allyson Ettinger, Ahmed Elgohary, Colin Phillips, and Philip Resnik. 2018. Assessing composition in sentence vector representations. In Proceed- ings of the 27th International Conference on Computational Linguistics, pages 1790-1801. Association for Computational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "RNNs as psycholinguistic subjects: Syntactic state and grammatical dependency",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Futrell",
"suffix": ""
},
{
"first": "Ethan",
"middle": [],
"last": "Wilcox",
"suffix": ""
},
{
"first": "Takashi",
"middle": [],
"last": "Morita",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Levy",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1809.01329"
]
},
"num": null,
"urls": [],
"raw_text": "Richard Futrell, Ethan Wilcox, Takashi Morita, and Roger Levy. 2018. RNNs as psycholinguistic subjects: Syntactic state and grammatical dependency. arXiv preprint arXiv:1809.01329.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "At least'",
"authors": [
{
"first": "Bart",
"middle": [],
"last": "Geurts",
"suffix": ""
},
{
"first": "Rick",
"middle": [],
"last": "Nouwen",
"suffix": ""
},
{
"first": ";",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2007,
"venue": "The semantics of scalar modifiers. Language",
"volume": "",
"issue": "",
"pages": "533--559",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bart Geurts and Rick Nouwen. 2007. 'At least' et al.: The semantics of scalar modifiers. Language, pages 533-559.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "English gigaword. Linguistic Data Consortium",
"authors": [
{
"first": "David",
"middle": [],
"last": "Graff",
"suffix": ""
},
{
"first": "Junbo",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "Ke",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Kazuaki",
"middle": [],
"last": "Maeda",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "4",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. 2003. English gigaword. Linguistic Data Consortium, Philadelphia, 4(1):34.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Colorless green recurrent networks dream hierarchically",
"authors": [
{
"first": "Kristina",
"middle": [],
"last": "Gulordava",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Tal",
"middle": [],
"last": "Linzen",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Society for Computation in Linguistics",
"volume": "2",
"issue": "",
"pages": "363--364",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, and Marco Baroni. 2019. Colorless green recurrent networks dream hierarchically. Proceedings of the Society for Computation in Linguistics, 2(1):363-364.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Scalable modified Kneser-Ney language model estimation",
"authors": [
{
"first": "Kenneth",
"middle": [],
"last": "Heafield",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Pouzyrevsky",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [
"H"
],
"last": "Clark",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "690--696",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenneth Heafield, Ivan Pouzyrevsky, Jonathan H. Clark, and Philipp Koehn. 2013. Scalable mod- ified Kneser-Ney language model estimation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 690-696.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Predicting grammaticality on an ordinal scale",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Heilman",
"suffix": ""
},
{
"first": "Aoife",
"middle": [],
"last": "Cahill",
"suffix": ""
},
{
"first": "Nitin",
"middle": [],
"last": "Madnani",
"suffix": ""
},
{
"first": "Melissa",
"middle": [],
"last": "Lopez",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Mulholland",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Tetreault",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "174--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Heilman, Aoife Cahill, Nitin Madnani, Melissa Lopez, Matthew Mulholland, and Joel Tetreault. 2014. Predicting grammaticality on an ordinal scale. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 174-180.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Universal language model fine-tuning for text classification",
"authors": [
{
"first": "Jeremy",
"middle": [],
"last": "Howard",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "328--339",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 328-339.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Do language models understand anything? On the ability of LSTMs to understand negative polarity items",
"authors": [
{
"first": "Jaap",
"middle": [],
"last": "Jumelet",
"suffix": ""
},
{
"first": "Dieuwke",
"middle": [],
"last": "Hupkes",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "222--231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jaap Jumelet and Dieuwke Hupkes. 2018. Do language models understand anything? On the ability of LSTMs to understand negative polarity items. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 222-231.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Verb argument structure alternations in word and sentence embeddings",
"authors": [
{
"first": "Katharina",
"middle": [],
"last": "Kann",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Warstadt",
"suffix": ""
},
{
"first": "Adina",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Society for Computation in Linguistics",
"volume": "2",
"issue": "",
"pages": "287--297",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katharina Kann, Alex Warstadt, Adina Williams, and Samuel R. Bowman. 2019. Verb argument structure alternations in word and sentence embeddings. Proceedings of the Society for Computation in Linguistics, 2(1):287-297.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Investigating variation in island effects",
"authors": [
{
"first": "Dave",
"middle": [],
"last": "Kush",
"suffix": ""
},
{
"first": "Terje",
"middle": [],
"last": "Lohndal",
"suffix": ""
},
{
"first": "Jon",
"middle": [],
"last": "Sprouse",
"suffix": ""
}
],
"year": 2018,
"venue": "Natural Language & Linguistic Theory",
"volume": "36",
"issue": "3",
"pages": "743--779",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dave Kush, Terje Lohndal, and Jon Sprouse. 2018. Investigating variation in island effects. Natural Language & Linguistic Theory, 36(3):743-779.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Grammaticality, acceptability, and probability: A probabilistic view of linguistic knowledge",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Jey Han Lau",
"suffix": ""
},
{
"first": "Shalom",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lappin",
"suffix": ""
}
],
"year": 2017,
"venue": "Cognitive Science",
"volume": "41",
"issue": "5",
"pages": "1202--1241",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jey Han Lau, Alexander Clark, and Shalom Lappin. 2017. Grammaticality, acceptability, and proba- bility: A probabilistic view of linguistic knowl- edge. Cognitive Science, 41(5):1202-1241.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Assessing the ability of LSTMs to learn syntax-sensitive dependencies",
"authors": [
{
"first": "Tal",
"middle": [],
"last": "Linzen",
"suffix": ""
},
{
"first": "Emmanuel",
"middle": [],
"last": "Dupoux",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2016,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "4",
"issue": "",
"pages": "521--535",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of LSTMs to learn syntax-sensitive dependencies. Trans- actions of the Association for Computational Linguistics, 4:521-535.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Targeted syntactic evaluation of language models",
"authors": [
{
"first": "Rebecca",
"middle": [],
"last": "Marvin",
"suffix": ""
},
{
"first": "Tal",
"middle": [],
"last": "Linzen",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1192--1202",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rebecca Marvin and Tal Linzen. 2018. Targeted syntactic evaluation of language models. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1192-1202.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Pointer sentinel mixture models",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Merity",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2016. Pointer sentinel mixture models. CoRR, abs/1609.07843.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Recurrent neural network based language model",
"authors": [
{
"first": "Tom\u00e1\u0161",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Karafi\u00e1t",
"suffix": ""
},
{
"first": "Luk\u00e1\u0161",
"middle": [],
"last": "Burget",
"suffix": ""
},
{
"first": "Jan\u010dernock\u1ef3",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Khudanpur",
"suffix": ""
}
],
"year": 2010,
"venue": "Eleventh Annual Conference of the International Speech Communication Association",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom\u00e1\u0161 Mikolov, Martin Karafi\u00e1t, Luk\u00e1\u0161 Burget, Jan\u010cernock\u1ef3, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Eleventh Annual Conference of the International Speech Communication Association.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1802.05365"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contex- tualized word representations. arXiv preprint arXiv:1802.05365.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Improving language understanding with unsupervised learning",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Karthik",
"middle": [],
"last": "Narasimhan",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding with unsupervised learning, Technical report, OpenAI.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Language models are unsupervised multitask learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2019,
"venue": "OpenAI Blog",
"volume": "",
"issue": "8",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8).",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Exploring the limits of transfer learning with a unified text-to",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Raffel",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Roberts",
"suffix": ""
},
{
"first": "Katherine",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Sharan",
"middle": [],
"last": "Narang",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Matena",
"suffix": ""
},
{
"first": "Yanqi",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"J"
],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv e-prints.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Uncovering the richness of the stimulus: Structure dependence and indirect statistical evidence",
"authors": [
{
"first": "Florencia",
"middle": [],
"last": "Reali",
"suffix": ""
},
{
"first": "Morten",
"middle": [
"H"
],
"last": "Christiansen",
"suffix": ""
}
],
"year": 2005,
"venue": "Cognitive Science",
"volume": "29",
"issue": "6",
"pages": "1007--1028",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Florencia Reali and Morten H. Christiansen. 2005. Uncovering the richness of the stimulus: Structure dependence and indirect statistical evidence. Cognitive Science, 29(6):1007-1028.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Syntactic Theory: A Formal Introduction",
"authors": [
{
"first": "A",
"middle": [],
"last": "Ivan",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Sag",
"suffix": ""
},
{
"first": "Emily",
"middle": [
"M"
],
"last": "Wasow",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bender",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivan A. Sag, Thomas Wasow, and Emily M. Bender. 2003. Syntactic Theory: A Formal Introduction, 2nd edition. CSLI Publications.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "The Empirical Base of Linguistics: Grammaticality Judgments and Linguistic Methodology",
"authors": [
{
"first": "Carson",
"middle": [
"T"
],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carson T. Sch\u00fctze. 1996. The Empirical Base of Linguistics: Grammaticality Judgments and Linguistic Methodology. University of Chicago Press.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Does string-based neural MT learn source syntax?",
"authors": [
{
"first": "Xing",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Inkit",
"middle": [],
"last": "Padhi",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1526--1534",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xing Shi, Inkit Padhi, and Kevin Knight. 2016. Does string-based neural MT learn source syntax? In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1526-1534.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "An Introduction to Syntactic Analysis and Theory",
"authors": [
{
"first": "Dominique",
"middle": [],
"last": "Sportiche",
"suffix": ""
},
{
"first": "Hilda",
"middle": [],
"last": "Koopman",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Stabler",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dominique Sportiche, Hilda Koopman, and Edward Stabler. 2013. An Introduction to Syntactic Analysis and Theory, John Wiley & Sons.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "What do you learn from context? Probing for sentence structure in contextualized word representations",
"authors": [
{
"first": "Ian",
"middle": [],
"last": "Tenney",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Berlin",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Poliak",
"suffix": ""
},
{
"first": "R",
"middle": [
"Thomas"
],
"last": "Mccoy",
"suffix": ""
},
{
"first": "Najoung",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Samuel",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Bowman",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Das",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R. Thomas McCoy, Najoung Kim, Benjamin Van Durme, Samuel R Bowman, Dipanjan Das, et al. 2019. What do you learn from context? Probing for sentence structure in contextualized word representations. In Proceedings of ICLR.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": ";",
"middle": [
"I"
],
"last": "Illia Polosukhin",
"suffix": ""
},
{
"first": "U",
"middle": [
"V"
],
"last": "Guyon",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Luxburg",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Wallach",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Fergus",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Vishwanathan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Garnett",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need, I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998-6008. Curran Associates, Inc.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "SuperGLUE: A stickier benchmark for generalpurpose language understanding systems",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yada",
"middle": [],
"last": "Pruksachatkun",
"suffix": ""
},
{
"first": "Nikita",
"middle": [],
"last": "Nangia",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
}
],
"year": 2019,
"venue": "33rd Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019a. SuperGLUE: A stickier benchmark for general- purpose language understanding systems. In 33rd Conference on Neural Information Processing Systems.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "353--355",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353-355.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "2019b. jiant 1.2: A software toolkit for research on general-purpose text understanding models",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ian",
"middle": [
"F"
],
"last": "Tenney",
"suffix": ""
},
{
"first": "Yada",
"middle": [],
"last": "Pruksachatkun",
"suffix": ""
},
{
"first": "Katherin",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Hula",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Raghu",
"middle": [],
"last": "Pappagari",
"suffix": ""
},
{
"first": "R",
"middle": [
"Thomas"
],
"last": "Shuning Jin",
"suffix": ""
},
{
"first": "Roma",
"middle": [],
"last": "Mccoy",
"suffix": ""
},
{
"first": "Yinghui",
"middle": [],
"last": "Patel",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Phang",
"suffix": ""
},
{
"first": "Haokun",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Najoung",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Ian F. Tenney, Yada Pruksachatkun, Katherin Yu, Jan Hula, Patrick Xia, Raghu Pappagari, Shuning Jin, R. Thomas McCoy, Roma Patel, Yinghui Huang, Jason Phang, Edouard Grave, Haokun Liu, Najoung Kim, Phu Mon Htut, Thibault F'evry, Berlin Chen, Nikita Nangia, Anhad Mohananey, Katharina Kann, Shikha Bordia, Nicolas Patry, David Benton, Ellie Pavlick, and Samuel R. Bowman. 2019b. jiant 1.2: A software toolkit for research on general-purpose text understanding models. http://jiant.info/.",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "Definiteness and the English existential. Language",
"authors": [
{
"first": "Gregory",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Betty",
"middle": [],
"last": "Birner",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "722--742",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gregory Ward and Betty Birner. 1995. Definite- ness and the English existential. Language, pages 722-742.",
"links": null
},
"BIBREF60": {
"ref_id": "b60",
"title": "Linguistic analysis of pretrained sentence encoders with acceptability judgments",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Warstadt",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1901.03438"
]
},
"num": null,
"urls": [],
"raw_text": "Alex Warstadt and Samuel R. Bowman. 2019. Lin- guistic analysis of pretrained sentence encoders with acceptability judgments. arXiv preprint arXiv:1901.03438.",
"links": null
},
"BIBREF61": {
"ref_id": "b61",
"title": "Investigating BERT's knowledge of language: Five analysis methods with NPIs",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Warstadt",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Ioana",
"middle": [],
"last": "Grosu",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Hagen",
"middle": [],
"last": "Blix",
"suffix": ""
},
{
"first": "Yining",
"middle": [],
"last": "Nie",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Alsop",
"suffix": ""
},
{
"first": "Shikha",
"middle": [],
"last": "Bordia",
"suffix": ""
},
{
"first": "Haokun",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Alicia",
"middle": [],
"last": "Parrish",
"suffix": ""
},
{
"first": "Sheng-Fu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": ";",
"middle": [],
"last": "Samuel",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of EMNLP-IJCNLP",
"volume": "",
"issue": "",
"pages": "2870--2880",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Warstadt, Yu Cao, Ioana Grosu, Wei Peng, Hagen Blix, Yining Nie, Anna Alsop, Shikha Bordia, Haokun Liu, Alicia Parrish, Sheng- Fu Wang, Jason Phang, Anhad Mohananey, Phu Mon Htut, Paloma Jereti\u010d, and Samuel R. Bowman. 2019a. Investigating BERT's knowledge of language: Five analysis methods with NPIs. In Proceedings of EMNLP-IJCNLP, pages 2870-2880.",
"links": null
},
"BIBREF62": {
"ref_id": "b62",
"title": "Neural network acceptability judgments",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Warstadt",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
}
],
"year": 2019,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "7",
"issue": "",
"pages": "625--641",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Warstadt, Amanpreet Singh, and Samuel R. Bowman. 2019b. Neural network acceptability judgments. Transactions of the Association for Computational Linguistics, 7:625-641.",
"links": null
},
"BIBREF63": {
"ref_id": "b63",
"title": "What do RNN language models learn about filler-gap dependencies?",
"authors": [
{
"first": "Ethan",
"middle": [],
"last": "Wilcox",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Takashi",
"middle": [],
"last": "Morita",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Futrell",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "211--221",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ethan Wilcox, Roger Levy, Takashi Morita, and Richard Futrell. 2018. What do RNN language models learn about filler-gap dependencies? In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 211-221.",
"links": null
},
"BIBREF64": {
"ref_id": "b64",
"title": "Structural supervision improves learning of non-local grammatical dependencies",
"authors": [
{
"first": "Ethan",
"middle": [],
"last": "Wilcox",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Futrell",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Levy",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "3302--3312",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ethan Wilcox, Peng Qian, Richard Futrell, Miguel Ballesteros, and Roger Levy. 2019. Structural supervision improves learning of non-local grammatical dependencies. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3302-3312.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Heatmap showing the correlation between models' accuracies in each of the 67 paradigms."
},
"FIGREF1": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Models' performance on BLiMP as a function of sentence length, perplexity, log probability of the acceptable sentence, and model confidence (calculated as |log P (S 1 ) \u2212 log P (S 2 )|)."
},
"FIGREF2": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "The effect of the locality of determiner-noun agreement (upper panel) and the type of agreement attractor (lower panel) on model performance."
},
"FIGREF3": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Transformer-XL (top) and LSTM LM (bottom) performance as a function of training size and phenomena in BLiMP. The gray line shows the average across all phenomena."
},
"FIGREF4": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Comparison of models' performance on the simple LM method and the 1-and 2-prefix methods. The upper panels show results from three phenomena that are compatible with both 1-prefix and 2-prefix methods. The lower panel shows the averages and standard deviations across all phenomena."
},
"TABREF0": {
"html": null,
"type_str": "table",
"text": "Summary of related work organized by linguistic phenomena tested. All studies analyze neural networks using acceptability judgments on minimal pairs mainly in English. Some studies appear multiple times.",
"content": "
",
"num": null
},
"TABREF1": {
"html": null,
"type_str": "table",
"text": "There was bound to be a fish escaping. There was unable to be a fish escaping.",
"content": ", or by keeping the noun the same |
and changing the determiner (e.g., Rachelle had |
bought those chair.). With completeness in mind, |
we include such complementary paradigms in |
BLiMP whenever possible. |
",
"num": null
},
"TABREF2": {
"html": null,
"type_str": "table",
"text": "Minimal pairs from each of the twelve linguistic phenomenon categories covered by BLiMP.",
"content": "",
"num": null
},
"TABREF5": {
"html": null,
"type_str": "table",
"text": "4M, and 1/8M tokens. For each size, we train the model on five different random samples of the original training data, which has a size of 83M tokens.12",
"content": "sizes: 64M, 32M, 16M, 8M, 4M, 2M, 1M, 1/2M, |
1/ |
). Our main |
experiments use GPT-2-large with 36 layers and |
774M parameters. LSTM We include a long-short term memory |
(LSTM, Hochreiter and Schmidhuber, 1997) |
LM in our experiments. Specifically, we test |
a pretrained LSTM LM from Gulordava et al. |
(2019) on BLiMP. The model is trained on a 83M- |
token corpus extracted from English Wikipedia. |
To investigate the effect of training size on |
model performance ( \u00a76.3), we retrain a series |
of LSTM and Transformer-XL models with the |
same hyperparameters and the following training |
",
"num": null
},
"TABREF6": {
"html": null,
"type_str": "table",
"text": "Those turtles that are boring April could not ever break those couches. Those turtles that are not boring April could ever break those couches.",
"content": "IRREGULAR | irregular past participle adjectives | 79 | 93 | 91 | 78 | 99 | The forgotten newspaper article was bad. | The forgot newspaper article was bad. |
FORMS | irregular past participle verbs | 80 | 85 | 66 | 90 | 95 | Edward hid the cats. | Edward hidden the cats. |
| adjunct island | 48 | 67 | 65 | 91 | 94 | Who has Colleen aggravated before kissing Judy? | Who has Colleen aggravated Judy before kissing? |
| complex NP island | 50 | 47 | 58 | 72 | 80 | Who hadn't some driver who would fire Jennifer's colleague embarrassed? | Who hadn't Jennifer's colleague embarrassed some driver who would fire? |
| coordinate structure constraint complex left branch | 32 | 30 | 36 | 42 | 90 | What lights could Spain sell and Andrea discover? | What could Spain sell lights and Andrea discover? |
ISLAND | coordinate structure constraint object extraction | 59 | 71 | 74 | 88 | 91 | Who will Elizabeth and Gregory cure? | Who will Elizabeth cure and Gregory? |
EFFECTS | left branch island echo question | 96 | 32 | 63 | 77 | 91 | David would cure what snake? | What would David cure snake? |
| left branch island simple question | 57 | 36 | 36 | 82 | 99 | Whose hat should Tonya wear? | Whose should Tonya wear hat? |
| sentential subject island | 61 | 43 | 37 | 35 | 61 | Who have many women's touring Spain embarrassed. | Who have many women's touring embarrassed Spain. |
| wh island | 56 | 47 | 20 | 77 | 73 | What could Alan discover he has run around? | What could Alan discover who has run around? |
| matrix question npi licensor present | 1 | 2 | 1 | 67 | 98 | Should Monica ever grin? | Monica should ever grin. |
| npi present 1 | 47 | 54 | 61 | 55 | 83 | Even these trucks have often slowed. | Even these trucks have ever slowed. |
NPI LICENSING | npi present 2 only npi licensor present only npi scope | 47 57 30 | 54 93 36 | 48 80 45 | 62 100 85 | 98 92 72 | Many skateboards also roll. Only Bill would ever complain. Only those doctors who Karla respects ever conceal many snakes. | Many skateboards ever roll. Even Bill would ever complain. Those doctors who only Karla respects ever conceal many snakes. |
| sentential negation npi licensor present | 93 | 100 | 99 | 89 | 93 | Those banks had not ever lied. | Those banks had really ever lied. |
| sentential negation npi scope | 45 | 23 | 53 | 95 | 81 | | |
| existential there quantifiers 1 | 91 | 96 | 94 | 99 | 94 | There aren't many lights darkening. | There aren't all lights darkening. |
QUANTIFIERS | existential there quantifiers 2 superlative quantifiers 1 | 62 45 | 16 63 | 14 84 | 24 84 | 76 91 | Each book is there disturbing Margaret. No man has revealed more than five forks. | There is each book disturbing Margaret. No man has revealed at least five forks. |
| superlative quantifiers 2 | 17 | 83 | 85 | 78 | 85 | An actor arrived at at most six lakes. | No actor arrived at at most six lakes. |
| distractor agreement relational noun | 24 | 76 | 77 | 83 | 81 | A sketch of lights doesn't appear. | A sketch of lights don't appear. |
SUBJECT-VERB AGR. | distractor agreement relative clause irregular plural subject verb agreement 1 irregular plural subject verb agreement 2 regular plural subject verb agreement 1 | 22 73 88 76 | 63 81 89 89 | 60 78 83 73 | 68 95 96 97 | 86 95 94 95 | Boys that aren't disturbing Natalie suffer. This goose isn't bothering Edward. The woman cleans every public park. Jeffrey hasn't criticized Donald. | Boys that aren't disturbing Natalie suffers. This goose weren't bothering Edward. The women cleans every public park. Jeffrey haven't criticized Donald. |
| regular plural subject verb agreement 2 | 81 | 83 | 85 | 96 | 95 | The dress crumples. | The dresses crumples. |
",
"num": null
},
"TABREF7": {
"html": null,
"type_str": "table",
"text": "Examples of all 67 paradigms in BLiMP along with model performance and estimated human agreement.",
"content": "",
"num": null
}
}
}
}