{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:38:54.504910Z" }, "title": "BLiMP : A Benchmark of Linguistic Minimal Pairs for English", "authors": [ { "first": "Alex", "middle": [], "last": "Warstadt", "suffix": "", "affiliation": {}, "email": "warstadt@nyu.edu" }, { "first": "Alicia", "middle": [], "last": "Parrish", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Haokun", "middle": [], "last": "Liu", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Anhad", "middle": [], "last": "Mohananey", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Wei", "middle": [], "last": "Peng", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Sheng-Fu", "middle": [], "last": "Wang", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Samuel", "middle": [ "R" ], "last": "Bowman", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [], "body_text": [ { "text": "We introduce BLiMP (The Benchmark of Linguistic Minimal Pairs, or ), a large new benchmark dataset for the targeted evaluation of statistical language models' knowledge of linguistic phenomena. The benchmark consists of 67 datasets, each containing 1000 minimal pairs isolating a specific grammatical contrast and collectively offering broad coverage of major phenomena in English grammar. Like the GLUE benchmark for reusable sentence understanding models (Wang et al., 2018) , assigns a single numerical score to a language model (LM) measuring its overall mastery of grammar, enabling straightforward comparison of LMs. The dataset is ideal for fine grained analysis of an LM's knowledge of different grammatical domains. For baselines, we evaluate four representative LMs from NLP literature. We find that is hard even for state-of-the-art models, though Transformers perform better than LSTM and ngram LMs. Humans overwhelmingly agree with the generated minimal pair contrasts in . A growing body of work evaluates LSTM LMs' knowledge of grammar by testing whether they prefer acceptable sentences over minimally different unacceptable ones (Linzen et al., 2016, a.o.) . So far, results have been mixed, motivating the creation of this benchmark which scales up this kind of investigation to isolate dozens of grammatical contrasts within an otherwise-uniform controlled artificial dataset. Our results show that knowledge of grammar has increased as LM technology progressed from n-grams to LSTMs to Transformers. LSTMs and Transformers alike are very accurate in detecting morphological and agreement violations, but state-of-the-art Transformer LMs have an especially large advantage over LSTMs in contrasts where simple generalizations are difficult to find, such as NPI licensing and island effects.", "cite_spans": [ { "start": 457, "end": 476, "text": "(Wang et al., 2018)", "ref_id": null }, { "start": 1146, "end": 1173, "text": "(Linzen et al., 2016, a.o.)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction & Prior Work", "sec_num": null }, { "text": "consists of 67 datasets of 1000 minimal pairs each, grouped into twelve broader categories (Table 1) . A minimal pair consists of two minimally different sentences where one is grammatically acceptable and the other is not. 
All minimal pairs in BLiMP contain the same number of tokens and differ only in word order or the identity of one lexical item, following Marvin and Linzen (2018).", "cite_spans": [ { "start": 356, "end": 380, "text": "Marvin and Linzen (2018)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 91, "end": 100, "text": "(Table 1)", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Data", "sec_num": null }, { "text": "We include minimal pairs illustrating well-known linguistic phenomena in morphology, syntax, and semantics. While this set is not exhaustive, it does cover a wide range of topics found in formal implementations of English grammar (e.g., HPSG) and in generative linguistics textbooks. To fully isolate the phenomena of interest, we use realistic artificially generated sentences, following Marvin and Linzen, a.o. To generate text, we construct a vocabulary of over 3300 lexical items labeled with features reflecting morphology (e.g. singular/plural), syntax (e.g. transitive/intransitive), and semantics (e.g. animate/inanimate), and build a simple artificial grammar for each paradigm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": null }, { "text": "We validate the acceptability contrasts in the generated pairs with Mechanical Turk annotators, testing 5 randomly selected pairs from each paradigm using the same forced-choice task that models are presented with. A majority vote of 20 annotators agrees with BLiMP on at least 4/5 examples from each paradigm and on 96.4% of pairs overall.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": null }, { "text": "We evaluate 4 baselines: (1) An n-gram LM trained on the English Gigaword corpus (Graff et al., 2003), based on a modified Kneser-Ney implementation (Heafield, 2011), which considers up to 5-grams, preventing the model from learning dependencies spanning more than 5 words. (2) An LSTM recurrent neural network LM from Gulordava et al. (2018). (3) Transformer-XL (Dai et al., 2019), a Transformer-based neural network LM that processes", "cite_spans": [ { "start": 80, "end": 100, "text": "(Graff et al., 2003)", "ref_id": null }, { "start": 152, "end": 168, "text": "(Heafield, 2011)", "ref_id": null }, { "start": 324, "end": 347, "text": "Gulordava et al. (2018)", "ref_id": "BIBREF2" }, { "start": 366, "end": 384, "text": "(Dai et al., 2019)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": null }, { "text": "long contiguous inputs of thousands of words during training. (4) GPT-2 (Radford et al., 2019), a larger neural network LM based on a standard Transformer architecture, which is not recurrent and directly models long-distance dependencies.", "cite_spans": [ { "start": 72, "end": 94, "text": "(Radford et al., 2019)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": null }, { "text": "Our primary evaluation is a forced-choice task, in which we test whether a model assigns a higher probability to the acceptable sentence than to the unacceptable one in each pair. While probability may not correspond to grammaticality when comparing very different sentences, we expect this to be a viable proxy when comparing minimally different sentences as in our data.
Additional metrics using word-level probabilities to more narrowly isolate model behavior yield broadly similar conclusions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": null }, { "text": "We report model accuracy for the 12 broad categories (Table 2). Overall, the state-of-the-art GPT-2 achieves the highest score and the n-gram LM the lowest, though all models perform significantly below humans. We find that some phenomena are easier than others: determiner-noun agreement is easy for all models, while islands are quite difficult. We replicate Marvin and Linzen's finding that LSTMs succeed at subject-verb agreement and, to some extent, binding/anaphora, but largely fail at NPI licensing.", "cite_spans": [], "ref_spans": [ { "start": 53, "end": 62, "text": "(Table 2)", "ref_id": null } ], "eq_spans": [], "section": "Results & Discussion", "sec_num": null }, { "text": "The n-gram model's poor overall performance confirms that BLiMP is not solvable from co-occurrence information alone. Rather, success at BLiMP is driven by the more abstract (and less interpretable) features learned by neural networks. There are a few exceptions to this pattern: n-grams are mostly sufficient to capture irregular verb forms. Furthermore, SoTA models still show little improvement over n-grams on some phenomena, such as quantifier restrictions and, most strikingly, island effects.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results & Discussion", "sec_num": null }, { "text": "We have offered BLiMP, a human-solvable challenge set that covers a broad range of major grammatical phenomena in English.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": null }, { "text": "BLiMP is hard even for SoTA models, though recent large-scale Transformers outperform simple baselines.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF2": { "ref_id": "b2", "title": "Colorless green recurrent networks dream hierarchically", "authors": [ { "first": "K", "middle": [], "last": "Gulordava", "suffix": "" }, { "first": "P", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "E", "middle": [], "last": "Grave", "suffix": "" }, { "first": "T", "middle": [], "last": "Linzen", "suffix": "" }, { "first": "M", "middle": [], "last": "Baroni", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Gulordava, P. Bojanowski, E. Grave, T. Linzen, and M. Baroni. 2018. Colorless green recurrent networks dream hierarchically.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Assessing the ability of LSTMs to learn syntax-sensitive dependencies", "authors": [ { "first": "T", "middle": [], "last": "Linzen", "suffix": "" }, { "first": "E", "middle": [], "last": "Dupoux", "suffix": "" }, { "first": "Y", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Linzen, E. Dupoux, and Y. Goldberg. 2016.
Assessing the ability of LSTMs to learn syntax-sensitive dependencies.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Targeted syntactic evaluation of language models", "authors": [ { "first": "R", "middle": [], "last": "Marvin", "suffix": "" }, { "first": "T", "middle": [], "last": "Linzen", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Marvin and T. Linzen. 2018. Targeted syntactic evaluation of language models.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Language models are unsupervised multitask learners", "authors": [ { "first": "A", "middle": [], "last": "Radford", "suffix": "" }, { "first": "J", "middle": [], "last": "Wu", "suffix": "" }, { "first": "R", "middle": [], "last": "Child", "suffix": "" }, { "first": "D", "middle": [], "last": "Luan", "suffix": "" }, { "first": "D", "middle": [], "last": "Amodei", "suffix": "" }, { "first": "I", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever. 2019. Language models are unsupervised multitask learners.", "links": null } }, "ref_entries": { "TABREF1": { "html": null, "content": "
Table 2: Percentage accuracy of four baseline models and raw human performance on BLiMP using a forced-choice task. A random guessing baseline would give expected accuracy of 50%.
Model       Overall  Ana. Agr  Arg. Str  Binding  Ctrl. Rais.  D-N Agr  Ellipsis  Filler-Gap  Irregular  Island  NPI   Quantifiers  S-V Agr
5-gram      60.5     47.9      71.9      64.4     68.5         70.0     36.9      58.1        79.5       53.7    45.5  53.5         60.3
LSTM        70.8     95.2      73.5      73.2     67.9         84.2     67.3      71.3        92.3       43.9    66.7  62.2         85.1
Transf.-XL  68.7     94.1      69.5      74.7     71.5         83.0     77.2      64.9        78.2       45.8    55.2  69.3         76.0
GPT-2       80.1     99.6      78.3      80.1     80.5         93.3     86.6      79.0        84.1       63.1    78.9  71.3         89.0
Human       88.6     97.5      90.0      87.3     83.9         92.2     85.0      86.9        97.0       84.9    88.1  86.6         90.9
", "type_str": "table", "num": null, "text": "Minimal pairs exemplifying each of the twelve linguistic phenomenon categories covered by . N is the number of 1000-example minimal pair paradigms within each category." } } } }