{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:28:33.041219Z" }, "title": "Recurrent babbling: evaluating the acquisition of grammar from limited input data", "authors": [ { "first": "Ludovica", "middle": [], "last": "Pannitto", "suffix": "", "affiliation": { "laboratory": "", "institution": "CIMeC University of Trento", "location": {} }, "email": "ludovica.pannitto@unitn.it" }, { "first": "Aur\u00e9lie", "middle": [], "last": "Herbelot", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Trento", "location": {} }, "email": "aurelie.herbelot@unitn.it" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Recurrent Neural Networks (RNNs) have been shown to capture various aspects of syntax from raw linguistic input. In most previous experiments, however, learning happens over unrealistic corpora, which do not reflect the type and amount of data a child would be exposed to. This paper remedies this state of affairs by training a Long Short-Term Memory network (LSTM) over a realistically sized subset of child-directed input. The behaviour of the network is analysed over time using a novel methodology which consists in quantifying the level of grammatical abstraction in the model's generated output (its 'babbling'), compared to the language it has been exposed to. We show that the LSTM indeed abstracts new structures as learning proceeds.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Recurrent Neural Networks (RNNs) have been shown to capture various aspects of syntax from raw linguistic input. In most previous experiments, however, learning happens over unrealistic corpora, which do not reflect the type and amount of data a child would be exposed to. This paper remedies this state of affairs by training a Long Short-Term Memory network (LSTM) over a realistically sized subset of child-directed input. The behaviour of the network is analysed over time using a novel methodology which consists in quantifying the level of grammatical abstraction in the model's generated output (its 'babbling'), compared to the language it has been exposed to. We show that the LSTM indeed abstracts new structures as learning proceeds.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Artificial Neural Networks, and Long Short-Term Memory Networks more specifically, have consistently demonstrated great capabilities in the area of language modeling. In addition to generating credible surface patterns, they show excellent performances when tested on very specific grammatical abilities (Gulordava et al., 2018; Lakretz et al., 2019) , without requiring any prior bias towards the syntactic structure of natural languages.", "cite_spans": [ { "start": 304, "end": 328, "text": "(Gulordava et al., 2018;", "ref_id": "BIBREF28" }, { "start": 329, "end": 350, "text": "Lakretz et al., 2019)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Do RNNs learn grammar?", "sec_num": "1" }, { "text": "From a theoretical point of view, these results seem to contradict the well-known argument of the poverty of the stimulus (Chomksy, 1959; Chomsky, 1968) and raise questions about the continuity hypothesis in language acquisition (Lust, 1999; Crain and Pietroski, 2001) . 
At the same time, a number of results give a much more mixed view of RNNs' abstraction capabilities (Marvin and Linzen, 2018; Chowdhury and Zamparelli, 2018) . It thus remains unclear how and to what extent grammatical abilities emerge in artificial language models, and how this knowledge is encoded in their representations, especially when considering notions such as productivity and compositionality (Baroni, 2020) , which are recognised as defining traits of natural languages.", "cite_spans": [ { "start": 122, "end": 137, "text": "(Chomksy, 1959;", "ref_id": "BIBREF6" }, { "start": 138, "end": 152, "text": "Chomsky, 1968)", "ref_id": "BIBREF7" }, { "start": 229, "end": 241, "text": "(Lust, 1999;", "ref_id": "BIBREF49" }, { "start": 242, "end": 268, "text": "Crain and Pietroski, 2001)", "ref_id": "BIBREF14" }, { "start": 375, "end": 400, "text": "(Marvin and Linzen, 2018;", "ref_id": "BIBREF51" }, { "start": 401, "end": 432, "text": "Chowdhury and Zamparelli, 2018)", "ref_id": "BIBREF10" }, { "start": 681, "end": 695, "text": "(Baroni, 2020)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Do RNNs learn grammar?", "sec_num": "1" }, { "text": "This paper proposes that the evaluation of RNN grammars should be widened to include the effect of the type of input data fed to the network, as well as the theoretical paradigm used to analyse its output. We specifically remark that much of the discussion concerning language modeling remains influenced by the mainstream generativist approach, which posits a sharp distinction between syntax and the lexicon. Our own approach will be to depart from this account by testing the grammatical abilities of an RNN in a usage-based perspective. Specifically, we ask what kind of structures are abstracted and used productively by the network, and how the abstraction process takes place over time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Do RNNs learn grammar?", "sec_num": "1" }, { "text": "In contrast with previous models: (i) we train a vanilla char-LSTM on a more realistic variety and amount of data, focusing on a limited amount of child-directed language; (ii) we do not rely on extrinsic evaluations or downstream tasks; instead, we introduce a methodology to evaluate how the distribution of grammatical items, over time, comes to approximate the one in the input, through a continuous process; and (iii) we tentatively explore the interaction between meaning representations and the abstraction abilities of the network, blurring the distinction between lexicon and syntax, in a way more akin to Construction Grammar (CxG, Fillmore, 1988; Goldberg, 1995; Kay and Fillmore, 1999) . Our evaluation focuses on the network's generated output (its 'babbling'), asking to what extent the system simulates the type of grammatical abstraction observed in human children. 
The study is conducted on English.", "cite_spans": [ { "start": 634, "end": 655, "text": "(CxG, Fillmore, 1988;", "ref_id": null }, { "start": 656, "end": 671, "text": "Goldberg, 1995;", "ref_id": "BIBREF25" }, { "start": 672, "end": 695, "text": "Kay and Fillmore, 1999)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Do RNNs learn grammar?", "sec_num": "1" }, { "text": "In what follows, we review related work ( \u00a7 2), we then formulate the question of grammar modelling in a broader theoretical framework ( \u00a7 3) in-volving three parameters: the type of acquisition mechanism under study, the nature of the input data, and the representational paradigm adopted for the analysis. We configure this broad framework with particular choices of parameters and implement it in \u00a7 4, 5 and 6. We provide two analyses of the distributional properties of the network's 'babbling', discussed in \u00a7 7.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Do RNNs learn grammar?", "sec_num": "1" }, { "text": "A considerable amount of literature has investigated the ability of ANNs to acquire grammar, and the list we present here is by no means exhaustive. The analysis of the syntactic abilities of LSTMs (Hochreiter and Schmidhuber, 1997) and ANN-based language models dates back quite a few years (McClelland, 1992; Lewis and Elman, 2001 ). Recent contributions have followed a general tendency to analyze the inner-workings of networks, and the specific type of knowledge they acquire (Alishahi et al., 2019; Linzen and Baroni, 2020) . For instance, Linzen et al. (2016) show how a network acquires abstract information about number agreement, albeit in a supervised setting. The same study is expanded in Gulordava et al. (2018) , which shows how a language modeling task is enough for a network to predict long-distance number agreement, both on semantically sound and nonsensical sentences. The authors conclude that \"LM-trained RNNs can construct abstract grammatical representations\", but their model is trained on a rather consequent amount of data (90M tokens) from a rather peculiar distribution (a Wikipedia snapshot). Similarly, it has been shown that LSTMs (McCoy et al., 2018; Wilcox et al., 2018) can learn tricky syntactic rules like the English auxiliary inversion and filler-gap dependencies, although, in later work, McCoy et al. (2020) find that only models with an explicit inductive bias (Shen et al., 2018) learn to generalize the MOVE-MAIN rule with respect to auxiliary inversion. Marvin and Linzen (2018) show instead poor performance of RNNs in grammaticality evaluation, due to their sensitivity to the specific lexical items encountered during training, a limitation that, they say, \"would not be expected if its syntactic representations were fully abstract\". Similarly Chowdhury and Zamparelli (2018) state that their model \"is sensitive to linguistic processing factors and probably ultimately unable to induce a more abstract notion of grammaticality\". Moreover, despite the fact that the model of Gulordava et al. (2018) is tested on four languages, the most promising results may not be generalizable to languages showing different surface patterns from English. Ravfogel et al. (2018) fail to replicate Gulordava et al. 
(2018) 's results on Basque, and Davis and van Schijndel (2020) , after testing the network on relative clause attachment cases in English and Spanish, conjecture that the associative (non-linguistic) bias of RNNs overlaps with English syntactic structure but represents an obstacle to learn attachment rules for Spanish.", "cite_spans": [ { "start": 198, "end": 232, "text": "(Hochreiter and Schmidhuber, 1997)", "ref_id": "BIBREF33" }, { "start": 292, "end": 310, "text": "(McClelland, 1992;", "ref_id": "BIBREF54" }, { "start": 311, "end": 332, "text": "Lewis and Elman, 2001", "ref_id": "BIBREF43" }, { "start": 481, "end": 504, "text": "(Alishahi et al., 2019;", "ref_id": "BIBREF1" }, { "start": 505, "end": 529, "text": "Linzen and Baroni, 2020)", "ref_id": "BIBREF46" }, { "start": 546, "end": 566, "text": "Linzen et al. (2016)", "ref_id": "BIBREF47" }, { "start": 702, "end": 725, "text": "Gulordava et al. (2018)", "ref_id": "BIBREF28" }, { "start": 1164, "end": 1184, "text": "(McCoy et al., 2018;", "ref_id": "BIBREF55" }, { "start": 1185, "end": 1205, "text": "Wilcox et al., 2018)", "ref_id": "BIBREF72" }, { "start": 1404, "end": 1423, "text": "(Shen et al., 2018)", "ref_id": "BIBREF65" }, { "start": 1500, "end": 1524, "text": "Marvin and Linzen (2018)", "ref_id": "BIBREF51" }, { "start": 2025, "end": 2048, "text": "Gulordava et al. (2018)", "ref_id": "BIBREF28" }, { "start": 2192, "end": 2214, "text": "Ravfogel et al. (2018)", "ref_id": "BIBREF64" }, { "start": 2233, "end": 2256, "text": "Gulordava et al. (2018)", "ref_id": "BIBREF28" }, { "start": 2283, "end": 2313, "text": "Davis and van Schijndel (2020)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Other puzzling results concern the relation of perplexity to syntactic performance (Warstadt et al., 2019; Hu et al., 2020) : having evaluated their models on 34 benchmarks, Hu et al. (2020) conclude with a call for a wider variety of syntactic phenomena to test on. Further studies have shown that networks carrying explicit inductive bias perform better than vanilla LSTMs. In a recent paper, Lepori et al. (2020) show that a constituency-based network generalizes more robustly than a dependencybased one, and that both outperform a more basic BiLSTM. Lastly, we mention the study carried out by Kuncoro et al. (2018) who perform their study using a character-based LSTM -a choice we will follow in this work.", "cite_spans": [ { "start": 83, "end": 106, "text": "(Warstadt et al., 2019;", "ref_id": "BIBREF71" }, { "start": 107, "end": 123, "text": "Hu et al., 2020)", "ref_id": "BIBREF35" }, { "start": 174, "end": 190, "text": "Hu et al. (2020)", "ref_id": "BIBREF35" }, { "start": 395, "end": 415, "text": "Lepori et al. (2020)", "ref_id": "BIBREF42" }, { "start": 599, "end": 620, "text": "Kuncoro et al. 
(2018)", "ref_id": "BIBREF38" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "A very similar scientific discussion, which we won't report in depth here, is blooming around Transformer-based language models (Tran et al., 2018; Goldberg, 2019; Bacon and Regier, 2019; Jawahar et al., 2019; Lin et al., 2019) , leading to similar contrasting results.", "cite_spans": [ { "start": 128, "end": 147, "text": "(Tran et al., 2018;", "ref_id": "BIBREF70" }, { "start": 148, "end": 163, "text": "Goldberg, 2019;", "ref_id": "BIBREF27" }, { "start": 164, "end": 187, "text": "Bacon and Regier, 2019;", "ref_id": "BIBREF2" }, { "start": 188, "end": 209, "text": "Jawahar et al., 2019;", "ref_id": "BIBREF36" }, { "start": 210, "end": 227, "text": "Lin et al., 2019)", "ref_id": "BIBREF45" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Finally, a separate line of work focuses on a more indirect test of the information encoded in the internal representation, assessing which aspects of the original syntactic structure can be reconstructed through diagnostic classifiers (Adi et al., 2017; Giulianelli et al., 2018; Hewitt and Manning, 2019; Tenney et al., 2019) .", "cite_spans": [ { "start": 236, "end": 254, "text": "(Adi et al., 2017;", "ref_id": "BIBREF0" }, { "start": 255, "end": 280, "text": "Giulianelli et al., 2018;", "ref_id": "BIBREF23" }, { "start": 281, "end": 306, "text": "Hewitt and Manning, 2019;", "ref_id": "BIBREF32" }, { "start": 307, "end": 327, "text": "Tenney et al., 2019)", "ref_id": "BIBREF68" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In summary, a clear trend has not yet emerged (Linzen and Baroni, 2020) . All the models we cited, however, seem to idealize syntactic structure as a separate and more abstract ability from the knowledge of statistical regularities or lexical co-occurrences. This perspective may reflect a belief in a sharp distinction between the lexicon and compositional rules. That is, ANNs are expected to gain abstract grammatical abilities through compositional generalization, where compositionality is understood as the ability to produce an unbounded number of sentences by means of a set of algebraic rules (Baroni, 2020) . In contrast with this approach, usage-based models encourage us to adopt a different perspective, and to analyze LSTMs' grammatical abilities with respect to the kind of representations (more in \u00a73.3) posited by theories such as Construction Grammar (CxG, Fillmore, 1988; Goldberg, 1995; Kay and Fillmore, 1999) .", "cite_spans": [ { "start": 46, "end": 71, "text": "(Linzen and Baroni, 2020)", "ref_id": "BIBREF46" }, { "start": 602, "end": 616, "text": "(Baroni, 2020)", "ref_id": "BIBREF4" }, { "start": 869, "end": 890, "text": "(CxG, Fillmore, 1988;", "ref_id": null }, { "start": 891, "end": 906, "text": "Goldberg, 1995;", "ref_id": "BIBREF25" }, { "start": 907, "end": 930, "text": "Kay and Fillmore, 1999)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In essence, the question of language acquisition asks how much language (\u039b) can be learned with a certain level of computational complexity (C) by being exposed to a certain type of data (I). The corresponding formalization, a : C \u00d7 I \u2192 \u039b, describes both human and artificial acquisition processes, and its components have been central in the linguistic debate. 
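As an illustration only, the mapping a : C \u00d7 I \u2192 \u039b can be read as a function that takes a learning mechanism and an input sample and returns the language the learner ends up producing. The sketch below uses hypothetical names (Learner, train_on, babble) that are not part of any released code; it simply fixes the types that the rest of the paper instantiates with a character-level LSTM, a child-directed corpus, and the network's babbling.

```python
# Illustrative sketch of the acquisition function a : C x I -> Lambda.
# All names here are placeholders; the paper instantiates C as a vanilla
# char-LSTM, I as a (child-directed) corpus, and Lambda as its babbling.
from typing import List, Protocol

Corpus = List[str]          # I: the raw sentences the learner is exposed to
LanguageSample = List[str]  # a sample of Lambda: the learner's own productions

class Learner(Protocol):
    """C: a pattern-finding mechanism of fixed computational complexity."""
    def train_on(self, corpus: Corpus) -> None: ...
    def babble(self, n_sentences: int) -> LanguageSample: ...

def acquire(mechanism: Learner, input_data: Corpus) -> LanguageSample:
    """a(C, I): expose the mechanism to the input, then sample its output language."""
    mechanism.train_on(input_data)
    return mechanism.babble(n_sentences=len(input_data))
```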
Below, we will discuss each term (C, I and \u039b) in further detail.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Framework", "sec_num": "3" }, { "text": "Our aim is to test how much grammatical structure can be induced from linguistic input through a pattern-finding mechanism such as that provided by ANNs. Therefore, we fix the level of computational complexity to a vanilla, character-based LSTM, which we train exploring different sources of input in a specific range {I i }, selected based on their complexity level. We then use the trained model to generate some amount of text (to babble), to explore the structure of the produced output \u2208 \u039b, mainly with respect to productivity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computational complexity of the acquisition mechanism (C)", "sec_num": "3.1" }, { "text": "(LST M, I i ) a \u2212 \u2192 i (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computational complexity of the acquisition mechanism (C)", "sec_num": "3.1" }, { "text": "Our choice of model has consequences from a theoretical point of view. Different stances have been taken about how much has to be hard-coded or innate in order for language acquisition to happen: while formal innatist theories have always posited the need for a specialized and innate ability, a dedicated device for language learning (Chomsky, 1981 (Chomsky, , 1995 Hauser et al., 2002) , cognitive theories have argued for a more systemic vision, showing how general purpose memory and cognitive mechanisms can account for the emergence of linguistic abilities (Tomasello, 2003; Goldberg, 2006; Christiansen and Chater, 2016; Cornish et al., 2017; Lewkowicz et al., 2018) .", "cite_spans": [ { "start": 335, "end": 349, "text": "(Chomsky, 1981", "ref_id": "BIBREF8" }, { "start": 350, "end": 366, "text": "(Chomsky, , 1995", "ref_id": "BIBREF9" }, { "start": 367, "end": 387, "text": "Hauser et al., 2002)", "ref_id": "BIBREF31" }, { "start": 563, "end": 580, "text": "(Tomasello, 2003;", "ref_id": "BIBREF69" }, { "start": 581, "end": 596, "text": "Goldberg, 2006;", "ref_id": "BIBREF26" }, { "start": 597, "end": 627, "text": "Christiansen and Chater, 2016;", "ref_id": "BIBREF11" }, { "start": 628, "end": 649, "text": "Cornish et al., 2017;", "ref_id": "BIBREF13" }, { "start": 650, "end": 673, "text": "Lewkowicz et al., 2018)", "ref_id": "BIBREF44" } ], "ref_spans": [], "eq_spans": [], "section": "Computational complexity of the acquisition mechanism (C)", "sec_num": "3.1" }, { "text": "LSTMs, under this perspective, can be seen as a domain-general attention and memory mecha-nism, without any explicitly hard-coded grammatical knowledge. They have been applied, without substantial modifications, to a variety of tasks, ranging from time series prediction to object cosegmentation, and encompassing grammar learning as well. 
On the continuum between specialized devices and general purpose associative mechanisms, LSTMs place themselves on the latter side, with their recurrent structure seeming to be crucial in the linguistic abstraction process (Tran et al., 2018) .", "cite_spans": [ { "start": 563, "end": 582, "text": "(Tran et al., 2018)", "ref_id": "BIBREF70" } ], "ref_spans": [], "eq_spans": [], "section": "Computational complexity of the acquisition mechanism (C)", "sec_num": "3.1" }, { "text": "Because of the traditional sharp distinction between competence and performance, the role of the input and the linguistic environment has been minimized by theories in the realm of Universal Grammar (UG). Usage-based theories, on the other hand, have granted the input a central role to the end of explaining why language is structured as it is (Fillmore, 1988; Kay and Fillmore, 1999; Hoffmann et al., 2013; Christiansen and Chater, 2016; Goldberg, 2019) : one of the striking points to make here is that in the usage-based framework the acquisition problem is framed as an incremental process. Acquiring language essentially entails learning how to process the linguistic input in an error-driven procedure, where full linguistic creativity and productivity are acquired gradually by speakers (Bannard et al., 2009) , building up on knowledge about specific items and restricted abstractions.", "cite_spans": [ { "start": 345, "end": 361, "text": "(Fillmore, 1988;", "ref_id": "BIBREF21" }, { "start": 362, "end": 385, "text": "Kay and Fillmore, 1999;", "ref_id": "BIBREF37" }, { "start": 386, "end": 408, "text": "Hoffmann et al., 2013;", "ref_id": "BIBREF34" }, { "start": 409, "end": 439, "text": "Christiansen and Chater, 2016;", "ref_id": "BIBREF11" }, { "start": 440, "end": 455, "text": "Goldberg, 2019)", "ref_id": "BIBREF27" }, { "start": 795, "end": 817, "text": "(Bannard et al., 2009)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Structure and role of the input (I)", "sec_num": "3.2" }, { "text": "In this sense, the specific features of the language on which ANNs are trained cannot be overlooked when it comes to describing their acquired grammatical abilities. Compared to what a child is exposed to during the most crucial months of language acquisition, ANNs are trained on an input that is often unrealistic in size: the LSTM introduced in Gulordava et al. (2018) is for example exposed to 90M tokens, and sees them multiple times over training. It is hard to come up with a precise estimate of the amount of language children are exposed to during the years of acquisition, as the variation depends on a huge number of factors including the socio-economic environment (Bee et al., 1969) or the societal organization (Cristia et al., 2019) . Hart and Risley (1995) , in a seminal work, estimate that, by the age of 3, welfare children have heard about 10 millions words while the average working-class child has heard around 30 millions. Finally, the domain of the data also matters: child-directed language is characterized by specific features (Matthews and Bannard, 2010) that are not present in the most widely used corpora. 1", "cite_spans": [ { "start": 348, "end": 371, "text": "Gulordava et al. 
(2018)", "ref_id": "BIBREF28" }, { "start": 677, "end": 695, "text": "(Bee et al., 1969)", "ref_id": "BIBREF5" }, { "start": 725, "end": 747, "text": "(Cristia et al., 2019)", "ref_id": "BIBREF15" }, { "start": 750, "end": 772, "text": "Hart and Risley (1995)", "ref_id": "BIBREF30" }, { "start": 1054, "end": 1082, "text": "(Matthews and Bannard, 2010)", "ref_id": "BIBREF52" } ], "ref_spans": [], "eq_spans": [], "section": "Structure and role of the input (I)", "sec_num": "3.2" }, { "text": "Any analysis of the language \u039b generated by a learner implies the availability of a representation. Much has been written on the respective benefits of various representations of linguistic structures: the exact nature of their shape and content is the ultimate conundrum of linguistic theory. Of course, this paper is not the place to review the wide variations that exists among theories, so we will just limit ourselves to motivate our choice with respect to the broader theoretical framework. Constituency-based representations have been prevalent in the description of natural language syntax, becoming primarily associated with derivational theories. Due to the Fregean view of compositionality, they have also become the natural building blocks for meaning composition. Dependency representations have, on the other hand, re-gained popularity over constituency representations in the last decades, showing desirable properties from a computational perspective (they adapt to a wider array of languages, representing ill-formed sentences results easier and the output is more easily incorporated in semantic graphs) while taking a more functional approach to language description, more in line with cognition oriented-approaches.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Shape and features of the generated language (\u039b)", "sec_num": "3.3" }, { "text": "In order to represent the features of \u2208 \u039b, we choose a representation which makes the least possible assumptions on the acquisition process and on the content of the generated language, and is at the same time flexible and computationally tractable. We therefore rely on dependency representations, more specifically the universal dependencies framework (Nivre et al., 2020) , from which we extract subtrees called catenae . As we will see below, the notion of catena is more flexible than that of constituent, and allows us to describe a larger set of generalizations.", "cite_spans": [ { "start": 354, "end": 374, "text": "(Nivre et al., 2020)", "ref_id": "BIBREF57" } ], "ref_spans": [], "eq_spans": [], "section": "Shape and features of the generated language (\u039b)", "sec_num": "3.3" }, { "text": "Generally speaking, CxG approaches seem to lack a shared representational framework 2 , relying on box diagrams or Attribute-Value Matrices to describe the traits of the fragments they study. The structures introduced by Osborne (2006) are characterized instead as fundamental meaning-bearing units , in line with the theoretical tenets of CxGs, thus being ideal candidates for the lexicon (or 'Constructicon') postulated in such theories: catenae have in fact been applied in the description of construction-like structures Dunn, 2017) and allow for the representation of non-adjacent structures while encompassing the notion of constituent as well (Osborne, 2006 (Osborne, , 2018 . 
A catena is defined as \"a word, or a combination of words which is continuous with respect to dominance\" : given a dependency tree, this definition selects a broader set of elements than the definition of constituent 3 . Unlike constituents, catenae can include both contiguous and non contiguous words. They however capture something more refined than generic subsets of sentence items, as the elements are grouped depending on the syntactic links holding in the sentence.", "cite_spans": [ { "start": 221, "end": 235, "text": "Osborne (2006)", "ref_id": "BIBREF59" }, { "start": 525, "end": 536, "text": "Dunn, 2017)", "ref_id": "BIBREF19" }, { "start": 650, "end": 664, "text": "(Osborne, 2006", "ref_id": "BIBREF59" }, { "start": 665, "end": 681, "text": "(Osborne, , 2018", "ref_id": "BIBREF62" } ], "ref_spans": [], "eq_spans": [], "section": "Shape and features of the generated language (\u039b)", "sec_num": "3.3" }, { "text": "From a graph-theory perspective, catenae form subtrees (i.e., subsets of nodes and edges that constitute a tree themselves) of the original tree.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Shape and features of the generated language (\u039b)", "sec_num": "3.3" }, { "text": "Let's consider for example the structures represented in Figures 1a, 1b and 1c : the same elements (nodes A to G) are arranged differently in the structure of dependency tree, and this leads to a different number and composition of catenae.", "cite_spans": [], "ref_spans": [ { "start": 57, "end": 78, "text": "Figures 1a, 1b and 1c", "ref_id": null } ], "eq_spans": [], "section": "Shape and features of the generated language (\u039b)", "sec_num": "3.3" }, { "text": "As a concrete example, Figure 2 represents a dependency tree, and Table 1 the structures that can be extracted from it: considering the lexical level, we can extract Mary had lamb, had a lamb, a little lamb as catenae. As the morpho-syntactic and syntactic levels are available, however, we can also extract partially filled structures as Mary had NOUN, nsubj VERB dobj and so on.", "cite_spans": [], "ref_spans": [ { "start": 23, "end": 31, "text": "Figure 2", "ref_id": "FIGREF1" }, { "start": 66, "end": 73, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Shape and features of the generated language (\u039b)", "sec_num": "3.3" }, { "text": "Of interest for our analysis, CxG argues that grammar items above the lexical level bear meaning themselves, and that this emerges from patterns of usage. 
According to Goldberg (2006) , for example, the meaning of the ditransitive pattern Sbj", "cite_spans": [ { "start": 168, "end": 183, "text": "Goldberg (2006)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Shape and features of the generated language (\u039b)", "sec_num": "3.3" }, { "text": "V A B C D E F G ROOT (a)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Shape and features of the generated language (\u039b)", "sec_num": "3.3" }, { "text": "The case of a flat structure, where all nodes are linked to the root: from a tree like this we can extract 2 6 \u2212 1 catenae, each one containing A plus a subset of its children nodes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Shape and features of the generated language (\u039b)", "sec_num": "3.3" }, { "text": "(b) The case where nodes are arranged in a full dependency chain: here the number of catenae corresponds to the number of substrings that could be extracted from the linear signal, that is 20.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A B C D E F G ROOT", "sec_num": null }, { "text": "(c) The case of a hierarchical structure, typically what we would find in linguistic trees, where the counts are less trivial to make. In particular, for each node we find that the number of catenae rooted in that node can be estimated depending on the number of catenae rooted in his children nodes, and depends therefore on the specific structure of the tree. Obj Obj2, and thus its productivity, emerges from its strong association with give in child-directed speech: part of the meaning of give remains attached to the construction. A natural, and promising (Rambelli et al., 2019) , solution to represent the semantics of catenae is given by Distributional Semantics (Harris, 1954) , where each element of the 'Constructicon' is implicitly described in terms of its context of use (Erk, 2012; Lenci, 2018) . We will see in \u00a76 how we can use such distributional representations to investigate the level of abstraction of our network's babbling. These different corpora vary in size: for our experiments we randomly (with uniform probability) extract sentences from each source so that the total number of tokens approximates 3 millions (10% are kept for validation and 10% for testing).", "cite_spans": [ { "start": 562, "end": 585, "text": "(Rambelli et al., 2019)", "ref_id": "BIBREF63" }, { "start": 672, "end": 686, "text": "(Harris, 1954)", "ref_id": "BIBREF29" }, { "start": 786, "end": 797, "text": "(Erk, 2012;", "ref_id": "BIBREF20" }, { "start": 798, "end": 810, "text": "Lenci, 2018)", "ref_id": "BIBREF41" } ], "ref_spans": [], "eq_spans": [], "section": "A B C D E F G ROOT", "sec_num": null }, { "text": "For each of the considered corpora, we train a character-based LSTM on the tokenized, raw text. To do so, we slightly modify the PyTorch implementation of a vanilla LSTM. 6 , adapting it to a character-based setting. We run a Bayesian optimization process (Nogueira, 2014-) to select the best hyperparameters for the corpus (values can be found in the supplementary material). We then produce a model every 5 epochs of training (for a total of 7 models for CHILDES, 9 models for Open Subtitles and 7 models for simple Wikipedia), as to be able to produce snapshots of the network's abilities at different stages during training. For each of the saved models, we sample 7 utterances until we reach the size of the input (the 'babbling' stage). 
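To make this setup concrete, the following is a minimal sketch of a character-level LSTM language model and of the greedy sampling loop used for babbling (a random sentence-initial character followed by greedy decoding, as described in the corresponding footnote). It is not the authors' modified version of the PyTorch example, and the hyperparameter values are placeholders rather than the values selected by the Bayesian optimization.

```python
# Minimal sketch, not the authors' code: a character-level LSTM language model
# and a greedy babbling loop. Hyperparameters are placeholders.
import torch
import torch.nn as nn

class CharLSTM(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, hidden_dim=512, num_layers=2, dropout=0.2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, num_layers,
                            dropout=dropout, batch_first=True)
        self.decoder = nn.Linear(hidden_dim, vocab_size)

    def forward(self, chars, hidden=None):
        emb = self.embed(chars)               # (batch, seq, emb_dim)
        out, hidden = self.lstm(emb, hidden)  # (batch, seq, hidden_dim)
        return self.decoder(out), hidden      # logits over the next character

def babble_sentence(model, id2char, initial_probs, max_len, eos="\n"):
    """Sample one utterance: the first character is drawn from the distribution of
    sentence-initial characters in the input; the rest is decoded greedily until an
    end-of-sentence marker or the maximum length (input mean + 2 sd) is reached."""
    model.eval()
    first = torch.multinomial(initial_probs, 1).item()
    chars, hidden, inp = [id2char[first]], None, torch.tensor([[first]])
    with torch.no_grad():
        for _ in range(max_len - 1):
            logits, hidden = model(inp, hidden)
            next_id = logits[0, -1].argmax().item()   # greedy continuation
            if id2char[next_id] == eos:
                break
            chars.append(id2char[next_id])
            inp = torch.tensor([[next_id]])
    return "".join(chars)
```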
An example of babbling is reported in Table 2 .", "cite_spans": [ { "start": 256, "end": 273, "text": "(Nogueira, 2014-)", "ref_id": "BIBREF58" } ], "ref_spans": [ { "start": 781, "end": 788, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Language models", "sec_num": "4.2" }, { "text": "As introduced in \u00a7 3, the outcome of the acquisition process is a language sample i , that we want to compare to the input language I i or to other language samples j produced at different stages of acquisition. Table 2 : Examples from input text and babbling produced by the best model, for each corpus; sentences have been sampled according to the distribution of sentence lengths in the data (one such example: She is a former municipality in the center of an arrondissement in the southwest of France). For the next steps, both the input text", "cite_spans": [], "ref_spans": [ { "start": 345, "end": 352, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Extracting catenae", "sec_num": "4.3" }, { "text": "(the corpus) and the network's babbling are linguistically processed and annotated up to the syntactic level with the UDPipe toolkit (Straka and Strakov\u00e1, 2017) (a schema of the full processing pipeline is presented in Figure 3 ). Since our aim is to monitor the syntactic behaviour of the network throughout learning, we extract catenae from the input corpus and from each babbling stage. To do so, we perform a recursive depth-first visit of dependency trees (pseudocode is provided in the supplementary material). That is, if the node A is a leaf, then the only possible catena is the one containing A itself; otherwise, all catenae rooted in A are formed by A plus a (possibly empty) combination of catenae rooted in its children nodes. With this procedure, we extract catenae from sentences (of length between 1 and 25). For efficiency reasons, we exclude catenae longer than 5 elements. Many structures are generated, not all of which are relevant: since we see catenae as pieces of the lexicon, frequency is not the only relevant parameter and elements should be positively associated in order to be recorded as objects. We therefore weigh the produced structures with a multivariate version of Mutual Information (MI), based on Van de Cruys (2011):", "cite_spans": [ { "start": 133, "end": 159, "text": "(Straka and Strakov\u00e1, 2017", "ref_id": "BIBREF67" } ], "ref_spans": [ { "start": 220, "end": 228, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Extracting catenae", "sec_num": "4.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "MI(x_1, \\ldots, x_n) = f(x_1, \\ldots, x_n) \\log_2 \\frac{p(x_1, \\ldots, x_n)}{\\prod_{i=1}^{n} p(x_i)}", "eq_num": "(2)" } ], "section": "Extracting catenae", "sec_num": "4.3" }, { "text": "where each probability is estimated by normalizing the observed frequency over the extracted tuples, i.e., p(x_1, ..., x_m) = f(x_1, ..., x_m) divided by the sum of f(y_1, ..., y_m) over all extracted tuples (y_1, ..., y_m). 
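To make the extraction and weighting procedure concrete, here is a minimal sketch (not the pseudocode from the supplementary material): nodes are token positions, the toy tree reproduces the Mary had a little lamb example of Figure 2, and the probability estimates in mi are plain relative frequencies that only illustrate Eq. (2).

```python
# Sketch of catena extraction (recursive depth-first visit) and MI weighting.
# `children` maps a head's position to the positions of its dependents.
import math
from itertools import product

def catenae_rooted_at(node, children):
    """All catenae rooted in `node`: the node alone if it is a leaf, otherwise the
    node plus a (possibly empty) combination of catenae rooted in its children."""
    per_child = [[None] + catenae_rooted_at(c, children) for c in children.get(node, [])]
    catenae = []
    for combo in product(*per_child):
        nodes = [node] + [n for part in combo if part is not None for n in part]
        catenae.append(tuple(sorted(nodes)))
    return catenae

def extract_catenae(children, root, max_len=5):
    """Collect catenae from every node of the tree, discarding the longest ones."""
    nodes = [root] + [n for deps in children.values() for n in deps]
    return [c for node in nodes
              for c in catenae_rooted_at(node, children) if len(c) <= max_len]

def mi(catena, catena_counts, element_counts):
    """Multivariate MI of Eq. (2): f(x1..xn) * log2( p(x1..xn) / prod_i p(xi) ),
    with probabilities estimated here as plain relative frequencies."""
    f = catena_counts[catena]
    p_joint = f / sum(catena_counts.values())
    p_indep = math.prod(element_counts[e] / sum(element_counts.values()) for e in catena)
    return f * math.log2(p_joint / p_indep)

# Toy tree for "Mary had a little lamb": had -> {Mary, lamb}, lamb -> {a, little}
children = {1: [0, 4], 4: [2, 3]}
tokens = ["Mary", "had", "a", "little", "lamb"]
catenae = [tuple(tokens[i] for i in c) for c in extract_catenae(children, root=1)]
# includes ('Mary', 'had', 'lamb'), ('had', 'a', 'lamb'), ('a', 'little', 'lamb'), ...
```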
Table 3 shows some of the structures with highest and lowest MI: from a qualitative perspective, it is evident that the measure is able to isolate linguistically relevant patterns, such as the basic intransitive and transitive structures (@nsubj @root and @nsubj VERB @obj).", "cite_spans": [ { "start": 6, "end": 13, "text": "...,ym)", "ref_id": null } ], "ref_spans": [ { "start": 16, "end": 23, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Extracting catenae", "sec_num": "4.3" }, { "text": "p(x 1 , ..., x m ) = f (x 1 ,...,xm) (y 1 ,...,ym) f (y 1 ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extracting catenae", "sec_num": "4.3" }, { "text": "It is important to remark that the linguistic annotation process (except for the tokenization step) and the catenae extraction processes are completely independent from the language modeling performed by the LSTM, which is only fed with raw text and is therefore completely agnostic about the linguistic categories superimposed by the parser.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extracting catenae", "sec_num": "4.3" }, { "text": "Our first analysis demonstrates that the language generated by the LSTM reproduces the distribution of the input, and that this happens well beyond the lexical level: in other words, the network has acquired statistical regularities at the level of grammatical patterns, and is able to use them productively to generate novel language fragments that adhere to the same distribution as the input. Fig. 4 shows the extent of this approximation for various pairs: (i) ( c i , c j ) \u2208 c 1...k (language fragments output by a particular stage of babbling, for each corpus c), (ii) ( c i , I c ), c i \u2208 c 1...k (fragments output by a particular stage of babbling, compared to those extracted from the respective input c), (iii) (I c i , I c j ), (BM c i , BM c j ), (I c i , BM c j ) (fragments extracted from the input or the best babbling stage, compared among different corpora c i , c j ). It emerges from the plot that correlations are very high within each corpus (on average, 0.935 for Figure 3 : The figure depicts the processing pipeline used for the experiments: raw text from corpora serves as input to the LSTM, that in turn produces raw text at different training stages (i.e., the babbling). Both the corpus and the babbling texts are then processed with a NLP pipeline in order to build treebanks, from which catenae are then extracted. These extracted structures form the constructicons, which are compared in the experiments described in Section 5 (dashed line). The structures in each constructicon are then represented in a Distributional Semantic Model, through co-occurrences extracted from the respective treebanks. The distributional semantic models are then used for the experiments in \u00a7 6 (dashed line). CHILDES, 0.929 for OpenSubtitles and 0.917 for Simple Wikipedia). In particular, the correlations between the best models (BM ) and the respective input series (I) show values that are among the highest, demonstrating that the network acquires structures and reproduces them with a distribution that almost perfectly matches the input. On the other hand, it is clear that different corpora show different distributions, as correlations between pairs of input series I and best models show much lower values 8 . 
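The text does not spell out the exact correlation statistic used for these comparisons; purely as an illustration, a comparison between two constructicons could be computed as a Spearman correlation over the frequencies of the catenae they share, as in the sketch below (function and variable names are ours, not the authors').

```python
# Illustrative sketch: correlating the catena distributions of two samples
# (e.g., a babbling stage and its input corpus). The choice of Spearman over
# shared-catena frequencies is an assumption made for this example.
from collections import Counter
from scipy.stats import spearmanr

def constructicon_correlation(catenae_a, catenae_b):
    counts_a, counts_b = Counter(catenae_a), Counter(catenae_b)
    shared = sorted(set(counts_a) & set(counts_b))
    if len(shared) < 2:
        return float("nan")
    rho, _ = spearmanr([counts_a[c] for c in shared], [counts_b[c] for c in shared])
    return rho
```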
Overall, CHILDES scores the best correlation values, probably due to the specific features of child-directed speech, specifically its repetitiousness Clark (2009) . Open-Subtitles interestingly shows intermediate properties, sharing quite a lot of catenae with CHILDES, 9 while Simple Wikipedia shows a completely different distribution.", "cite_spans": [ { "start": 2384, "end": 2396, "text": "Clark (2009)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 396, "end": 402, "text": "Fig. 4", "ref_id": "FIGREF2" }, { "start": 987, "end": 995, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "What do ANNs approximate?", "sec_num": "5" }, { "text": "Our second analysis relies on the idea that we can state that the network has learned some grammar once it is able to use an acquired pattern in a pro- ductive and creative way. Following the basic hypothesis of CxG, stated in \u00a7 3.3, we expect this generalization ability to evolve during training and the distributional properties of patterns to be in relation with the grammatical abilities of the network at various stages of learning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Meaning and abstraction", "sec_num": "6" }, { "text": "Let's consider the structures cat 1 : the dog and cat 2 : DET NOUN. For the purpose of our analysis, we will consider (cat 1 , cat 2 ) to be a minimal pair, as the dog can be considered a lexicalized instance of the more abstract construction DET NOUN. Using a distributional analysis, we can capture how the contexts of cat 1 and cat 2 vary, and how this variation is associated with generalization. If their cosine similarity decreases during training, it means that their contexts become more and more dissimilar: the network produces DET NOUN in new contexts which do not perfectly overlap with those of the dog, indicating that the network's babbling is becoming more productive (a graphical representation is given in Figure 5 ). In this case, we theorize that cat 2 has been recognised as a partially independent pattern from cat 1 . If, on the contrary, their cosine similarity increases, we might deduce that the network has recognized cat 2 as partly unnecessary: it is correcting an overgeneralization.", "cite_spans": [], "ref_spans": [ { "start": 724, "end": 732, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "Meaning and abstraction", "sec_num": "6" }, { "text": "We restrict this analysis to the CHILDES corpus. We build distributional vector spaces for the input and each stage of babbling using the DISSECT toolkit (Dinu et al., 2013) . We consider catenae Figure 5 : Let us assume that the input presents various lexicalized instances of the pattern DET NOUN (e.g. the dog, the cat, a giraffe). Our hypothesis is that the network will only be able to capture its more stereotypical instances (i.e., the dog), and the distributions of the dog and DET NOUN will thus almost perfectly overlap in the first stages of babbling (the length of vectors in the figure is just for exemplification). 
At later stages, the language produced by the network will show greater productivity: the distribution of DET NOUN might show that its cosine distance to the dog has increased as it is now instantiated by two different lexicalized patterns (the dog and the cat) that are produced in dissimilar contexts.", "cite_spans": [ { "start": 154, "end": 173, "text": "(Dinu et al., 2013)", "ref_id": "BIBREF18" } ], "ref_spans": [ { "start": 196, "end": 204, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "Meaning and abstraction", "sec_num": "6" }, { "text": "composed of 2 or 3 elements as targets/contexts, and define co-occurrence as the presence of two catenae in the same sentence. Co-occurrences are weighted with PPMI and the space reduced to 300 dimensions with SVD. We then extract minimal pairs (cat 1 , cat 2 ) of catenae from the input text, where cat 1 is an instance of cat 2 . For each pair, we compute their cosine similarity in all distributional spaces, and the difference in cosine between the last and first babbling (see Table 4 ).", "cite_spans": [], "ref_spans": [ { "start": 482, "end": 489, "text": "Table 4", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Meaning and abstraction", "sec_num": "6" }, { "text": "We then compute average distributional shifts and cosine similarities, grouping all pairs by cat 1 and cat 2 values (for instance, we average all pairs that show abstractions of cat 1 : a minute, as well as pairs that show instantiations of cat 2 : DET NOUN). Some averages are shown in Table 5 .", "cite_spans": [], "ref_spans": [ { "start": 287, "end": 294, "text": "Table 5", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Meaning and abstraction", "sec_num": "6" }, { "text": "We finally split catenae into three bins based on average distributional shift and investigate the influence of input similarity on the abstraction behaviour of a construction. Our hypothesis is that catenae that underwent the highest shifts during training were those showing intermediate levels of similarity in the input distributional space. Indeed, pairs with very high input similarities are unlikely to exhibit abstraction: according to constructionist intuition, their distributional similarity means that the catena that is part of the Constructicon is the least abstract one, and there is no need for the more abstract category. Low similarity pairs, on the other hand, may simply contain unrelated catenae.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Meaning and abstraction", "sec_num": "6" }, { "text": "To test our hypothesis, we perform a Kruskal-Wallis one-way analysis of variance test, which turns out to be significant for groupings made on both cat 1 and cat 2 lists. 10 The result is confirmed by Dunn's post-hoc test. We show results for the test performed on the cat 2 list in Table 6 and Figure 6 .", "cite_spans": [], "ref_spans": [ { "start": 282, "end": 289, "text": "Table 6", "ref_id": "TABREF8" }, { "start": 294, "end": 302, "text": "Figure 6", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Meaning and abstraction", "sec_num": "6" }, { "text": "Usage-based computational accounts have already been shown to be able to explain puzzling phenomena in acquisition (Freudenthal et al., 2015; McCauley and Christiansen, 2019) or to induce syntactic rules in an unsupervised manner (Solan et al., 2005) , making use of surface properties of the language signal like transitional probabilities or basic distributional analysis. 
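As a concrete illustration of the minimal-pair analysis of Section 6, the distributional shift of a pair (cat 1 , cat 2 ) and the significance test over the resulting bins can be sketched as follows. The vector spaces are assumed to have been built beforehand (PPMI weighting and SVD to 300 dimensions, e.g. with DISSECT), and all function names are illustrative rather than taken from the authors' code.

```python
# Sketch of the Section 6 analysis: cosine shift of a minimal pair between the
# first and last babbling spaces, plus a Kruskal-Wallis test over shift-based bins.
# Spaces are dicts mapping a catena to its (reduced) distributional vector.
import numpy as np
from scipy.stats import kruskal

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def distributional_shift(cat1, cat2, first_space, last_space):
    """cosine(cat1, cat2) at the last babbling stage minus the same value at the
    first stage: a negative shift indicates that the abstract pattern cat2 is now
    produced in contexts that diverge from its lexicalized instance cat1."""
    return (cosine(last_space[cat1], last_space[cat2])
            - cosine(first_space[cat1], first_space[cat2]))

def test_shift_bins(sims_low_shift, sims_mid_shift, sims_high_shift):
    """Kruskal-Wallis test: do catenae falling in the three shift bins differ in
    the similarity their pairs showed in the input space?"""
    return kruskal(sims_low_shift, sims_mid_shift, sims_high_shift)
```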
However, despite being rooted in the psychological literature and yielding fundamental psycholinguistic results, the models presented in such investigations are often not comparable to studies involving neural language models, as the former are usually less flexible and less scalable to large amounts of data than the latter. In this paper, we have reviewed relevant work concerning the assessment of grammatical abilities in neural language models and noted the lack of variety in both the input data fed to ANNs (I) and the theoretical framework used in analysing the output language (\u039b). In line with the existing usagebased computational accounts, we have introduced a methodology to evaluate the level of productivity of an LSTM trained on limited, child-directed data, using inspirations from constructionist approaches.", "cite_spans": [ { "start": 110, "end": 136, "text": "(Freudenthal et al., 2015;", "ref_id": "BIBREF22" }, { "start": 137, "end": 169, "text": "McCauley and Christiansen, 2019)", "ref_id": "BIBREF53" }, { "start": 225, "end": 245, "text": "(Solan et al., 2005)", "ref_id": "BIBREF66" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion and future work", "sec_num": "7" }, { "text": "We have been able to show that neural networks approximate the distribution of constructions at a quite refined level when trained over a bare 3M words from the CHILDES corpus, reproducing the distribution of grammatical patterns even when they are not fully lexicalized. The analysis in \u00a7 5 indicates that the linguistic variety of OpenSubtitles is a potentially relevant benchmark to further investigate language acquisition, due to its similarity to the CHILDES data. In contrast, Simple Wikipedia has proved to be dissimilar to child-directed speech. This large difference should be taken into consideration when it comes to evaluating the grammatical abilities on the network: many of the studies cited in \u00a7 2 use models trained on Wikipedia or similar varieties, which may complicate the acquisition of generic grammatical phenomena heavily present in child-directed language. The analysis in \u00a7 6 further illustrated how we can follow paths of abstraction by putting our grammar formalism in a vector space. Additional investigations are of course needed to confirm our results. In particular, we would like to target the behavior of some specific sets of structures.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and future work", "sec_num": "7" }, { "text": "Most importantly, the introduced methodology, despite being preliminary, presents a number of features that make our study fit in the usage-based theoretical framework while also using neural networks as language modeling tools, more specifically: (i) it posits no sharp distinction between lexicon and grammar: fully lexicalized, partially filled and purely syntactic patterns are all part of our constructicon and can play a similar role in production. 
Different items can therefore be represented and compared, irrespective of their lexical nature; (ii) it makes no assumption about the stability of the constructicon: what is relevant for productivity at the earliest stages of learning might become superfluous later on; (iii) all items are seen as form-meaning pairs (i.e., constructions by definition, as in Goldberg, 2006) : a novel way of modeling constructional meaning is therefore introduced and represents a promising path for future studies; (iv) distributional semantics is used both as a powerful quantitative tool and as a usage-based cognitive hypothesis, which leads us to specific assumptions about the cognitive format and origin of semantic representations (Lenci, 2008) , and seems in line with the view of constructions as \"invitations to form categories\" (Goldberg, 2019) .", "cite_spans": [ { "start": 810, "end": 825, "text": "Goldberg, 2006)", "ref_id": "BIBREF26" }, { "start": 1174, "end": 1187, "text": "(Lenci, 2008)", "ref_id": "BIBREF40" }, { "start": 1275, "end": 1291, "text": "(Goldberg, 2019)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion and future work", "sec_num": "7" }, { "text": "Finally, we must account for potential biases introduced by applying dependency parsing to both input data and neural babbling: while this step is necessary to extract catenae, it introduces a non-negligible amount of noise, as the available pipelines are typically trained on different varieties than the ones considered in this study. In particular, the parser is somehow projecting its own categories, which have been acquired in a different setting and probably on a different variety, onto our data. This currently limits the transferability of our results. Besides looking for ways to circumvent this issue, further work includes a comparison of our results with a wider choice of models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and future work", "sec_num": "7" }, { "text": "Specifically, those that contain data harvested from the web such as Wikipedia or UKWaC. 2 An exception should be made for the formalisms derived from the FrameNet project (https://framenet.icsi.
berkeley.edu/)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "which can be seen as a subtype of catena as \"A catena that consists of a word plus all the words that that word dominates\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://www.themoviedb.org/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://simple.wikipedia.org/ 6 https://github.com/pytorch/examples/ tree/master/word_language_model 7 The sampling happens as follows: a random initial letter is picked, with a probability depending on the distribution of letters at the beginning of sentences in the input data, then letters are sampled with a greedy algorithm until an end of sentence marker is reached or the length surpasses the average sentence length of the input plus 2 standard deviations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The complete set of correlation values is reported in supplementary material9 The Jaccard index between CHILDES and OpenSubtitles remains above 0.5, even when considering the top 1M catenae, while the same index computed between CHILDES and Simple Wikipedia drops to around 0.13.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "p = 6.988142426844016e-28 for cat1 and p = 7.420868598608134e-32 for cat2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank Dr. Lucia Busso for useful discussions on an earlier version of this work and Lucio Messina for helping us in condensing results into figures and tables. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan V GPU used for this research. We would also like to thank the anonymous reviewers for their helpful suggestions and comments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Fine-grained analysis of sentence embeddings using auxiliary prediction tasks", "authors": [ { "first": "Yossi", "middle": [], "last": "Adi", "suffix": "" }, { "first": "Einat", "middle": [], "last": "Kermany", "suffix": "" }, { "first": "Yonatan", "middle": [], "last": "Belinkov", "suffix": "" }, { "first": "Ofer", "middle": [], "last": "Lavi", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2017, "venue": "International Conference on Learn-ingRepresentations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2017. Fine-grained anal- ysis of sentence embeddings using auxiliary predic- tion tasks. 
In International Conference on Learn- ingRepresentations.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Analyzing and interpreting neural networks for nlp: A report on the first blackboxnlp workshop", "authors": [ { "first": "Afra", "middle": [], "last": "Alishahi", "suffix": "" }, { "first": "Grzegorz", "middle": [], "last": "Chrupa\u0142a", "suffix": "" }, { "first": "Tal", "middle": [], "last": "Linzen", "suffix": "" } ], "year": 2019, "venue": "Natural Language Engineering", "volume": "25", "issue": "4", "pages": "543--557", "other_ids": {}, "num": null, "urls": [], "raw_text": "Afra Alishahi, Grzegorz Chrupa\u0142a, and Tal Linzen. 2019. Analyzing and interpreting neural networks for nlp: A report on the first blackboxnlp workshop. Natural Language Engineering, 25(4):543-557.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Does bert agree? evaluating knowledge of structure dependence through agreement relations", "authors": [ { "first": "Geoff", "middle": [], "last": "Bacon", "suffix": "" }, { "first": "Terry", "middle": [], "last": "Regier", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1908.09892" ] }, "num": null, "urls": [], "raw_text": "Geoff Bacon and Terry Regier. 2019. Does bert agree? evaluating knowledge of structure depen- dence through agreement relations. arXiv preprint arXiv:1908.09892.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Modeling children's early grammatical knowledge", "authors": [ { "first": "Colin", "middle": [], "last": "Bannard", "suffix": "" }, { "first": "Elena", "middle": [], "last": "Lieven", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Tomasello", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the National Academy of Sciences", "volume": "106", "issue": "41", "pages": "17284--17289", "other_ids": {}, "num": null, "urls": [], "raw_text": "Colin Bannard, Elena Lieven, and Michael Tomasello. 2009. Modeling children's early grammatical knowledge. Proceedings of the National Academy of Sciences, 106(41):17284-17289.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Linguistic generalization and compositionality in modern artificial neural networks", "authors": [ { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" } ], "year": null, "venue": "Philosophical Transactions of the Royal Society B", "volume": "375", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marco Baroni. 2020. Linguistic generalization and compositionality in modern artificial neural net- works. Philosophical Transactions of the Royal So- ciety B, 375(1791):20190307.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Social class differences in maternal teaching strategies and speech patterns", "authors": [ { "first": "L", "middle": [], "last": "Helen", "suffix": "" }, { "first": "", "middle": [], "last": "Bee", "suffix": "" }, { "first": "Ann", "middle": [ "Pytkowicz" ], "last": "Lawrence F Van Egeren", "suffix": "" }, { "first": "", "middle": [], "last": "Streissguth", "suffix": "" }, { "first": "Maxine", "middle": [ "S" ], "last": "Barry A Nyman", "suffix": "" }, { "first": "", "middle": [], "last": "Leckie", "suffix": "" } ], "year": 1969, "venue": "Developmental Psychology", "volume": "1", "issue": "6p1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Helen L Bee, Lawrence F Van Egeren, Ann Pytkow- icz Streissguth, Barry A Nyman, and Maxine S Leckie. 1969. 
Social class differences in maternal teaching strategies and speech patterns. Develop- mental Psychology, 1(6p1):726.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Review of skinner's verbal behaviour. Language", "authors": [ { "first": "Noam", "middle": [], "last": "Chomksy", "suffix": "" } ], "year": 1959, "venue": "", "volume": "35", "issue": "", "pages": "26--58", "other_ids": {}, "num": null, "urls": [], "raw_text": "Noam Chomksy. 1959. Review of skinner's verbal be- haviour. Language, 35:26-58.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Language and Mind", "authors": [ { "first": "Noam", "middle": [], "last": "Chomsky", "suffix": "" } ], "year": 1968, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Noam Chomsky. 1968. Language and Mind. New York: Harcourt Brace Jovanovich.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Lectures on government and binding. Dordrecht: Foris", "authors": [ { "first": "Noam", "middle": [], "last": "Chomsky", "suffix": "" } ], "year": 1981, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Noam Chomsky. 1981. Lectures on government and binding. Dordrecht: Foris.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "The minimalist program", "authors": [ { "first": "Noam", "middle": [], "last": "Chomsky", "suffix": "" } ], "year": 1995, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Noam Chomsky. 1995. The minimalist program. MIT Press.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Rnn simulations of grammaticality judgments on long-distance dependencies", "authors": [ { "first": "Absar", "middle": [], "last": "Shammur", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Chowdhury", "suffix": "" }, { "first": "", "middle": [], "last": "Zamparelli", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th international conference on computational linguistics", "volume": "", "issue": "", "pages": "133--144", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shammur Absar Chowdhury and Roberto Zamparelli. 2018. Rnn simulations of grammaticality judgments on long-distance dependencies. In Proceedings of the 27th international conference on computational linguistics, pages 133-144.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Creating language: Integrating evolution, acquisition, and processing", "authors": [ { "first": "H", "middle": [], "last": "Morten", "suffix": "" }, { "first": "Nick", "middle": [], "last": "Christiansen", "suffix": "" }, { "first": "", "middle": [], "last": "Chater", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Morten H Christiansen and Nick Chater. 2016. Cre- ating language: Integrating evolution, acquisition, and processing. MIT Press.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "First language acquisition", "authors": [ { "first": "V", "middle": [], "last": "Eve", "suffix": "" }, { "first": "", "middle": [], "last": "Clark", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eve V Clark. 2009. First language acquisition. 
Cam- bridge University Press.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Sequence memory constraints give rise to language-like structure through iterated learning", "authors": [ { "first": "Hannah", "middle": [], "last": "Cornish", "suffix": "" }, { "first": "Rick", "middle": [], "last": "Dale", "suffix": "" }, { "first": "Simon", "middle": [], "last": "Kirby", "suffix": "" }, { "first": "Morten", "middle": [ "H" ], "last": "Christiansen", "suffix": "" } ], "year": 2017, "venue": "PloS one", "volume": "", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hannah Cornish, Rick Dale, Simon Kirby, and Morten H Christiansen. 2017. Sequence mem- ory constraints give rise to language-like structure through iterated learning. PloS one, 12(1).", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Nature, nurture and universal grammar. Linguistics and philosophy", "authors": [ { "first": "Stephen", "middle": [], "last": "Crain", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Pietroski", "suffix": "" } ], "year": 2001, "venue": "", "volume": "24", "issue": "", "pages": "139--186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen Crain and Paul Pietroski. 2001. Nature, nur- ture and universal grammar. Linguistics and philos- ophy, 24(2):139-186.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Child-directed speech is infrequent in a forager-farmer population: a time allocation study", "authors": [ { "first": "Alejandrina", "middle": [], "last": "Cristia", "suffix": "" }, { "first": "Emmanuel", "middle": [], "last": "Dupoux", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Gurven", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Stieglitz", "suffix": "" } ], "year": 2019, "venue": "Child development", "volume": "90", "issue": "3", "pages": "759--773", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alejandrina Cristia, Emmanuel Dupoux, Michael Gur- ven, and Jonathan Stieglitz. 2019. Child-directed speech is infrequent in a forager-farmer popula- tion: a time allocation study. Child development, 90(3):759-773.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Two multivariate generalizations of pointwise mutual information", "authors": [ { "first": "Tim", "middle": [], "last": "Van De Cruys", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Workshop on Distributional Semantics and Compositionality", "volume": "", "issue": "", "pages": "16--20", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tim Van de Cruys. 2011. Two multivariate generaliza- tions of pointwise mutual information. In Proceed- ings of the Workshop on Distributional Semantics and Compositionality, pages 16-20. Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Recurrent neural network language models always learn English-like relative clause attachment", "authors": [ { "first": "Forrest", "middle": [], "last": "Davis", "suffix": "" }, { "first": "Marten", "middle": [], "last": "Van Schijndel", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1979--1990", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.179" ] }, "num": null, "urls": [], "raw_text": "Forrest Davis and Marten van Schijndel. 2020. Recur- rent neural network language models always learn English-like relative clause attachment. 
In Proceed- ings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1979-1990, Online. Association for Computational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "DISSECT -DIStributional SEmantics composition toolkit", "authors": [ { "first": "Georgiana", "middle": [], "last": "Dinu", "suffix": "" }, { "first": "", "middle": [], "last": "Nghia The", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Pham", "suffix": "" }, { "first": "", "middle": [], "last": "Baroni", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics: System Demonstrations", "volume": "", "issue": "", "pages": "31--36", "other_ids": {}, "num": null, "urls": [], "raw_text": "Georgiana Dinu, Nghia The Pham, and Marco Baroni. 2013. DISSECT -DIStributional SEmantics com- position toolkit. In Proceedings of the 51st An- nual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 31-36, Sofia, Bulgaria. Association for Computational Lin- guistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Learnability and falsifiability of construction grammars", "authors": [ { "first": "Jonathan", "middle": [], "last": "Dunn", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Linguistic Society of America", "volume": "2", "issue": "", "pages": "1--1", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonathan Dunn. 2017. Learnability and falsifiability of construction grammars. Proceedings of the Linguis- tic Society of America, 2:1-1.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Vector space models of word meaning and phrase meaning: A survey", "authors": [ { "first": "Katrin", "middle": [], "last": "Erk", "suffix": "" } ], "year": 2012, "venue": "Language and Linguistics Compass", "volume": "6", "issue": "10", "pages": "635--653", "other_ids": {}, "num": null, "urls": [], "raw_text": "Katrin Erk. 2012. Vector space models of word mean- ing and phrase meaning: A survey. Language and Linguistics Compass, 6(10):635-653.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "The mechanisms of \"construction grammar", "authors": [ { "first": "J", "middle": [], "last": "Charles", "suffix": "" }, { "first": "", "middle": [], "last": "Fillmore", "suffix": "" } ], "year": 1988, "venue": "Annual Meeting of the Berkeley Linguistics Society", "volume": "14", "issue": "", "pages": "35--55", "other_ids": {}, "num": null, "urls": [], "raw_text": "Charles J Fillmore. 1988. The mechanisms of \"con- struction grammar\". In Annual Meeting of the Berke- ley Linguistics Society, volume 14, pages 35-55.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Simulating the crosslinguistic pattern of optional infinitive errors in children's declaratives and wh-questions", "authors": [ { "first": "Daniel", "middle": [], "last": "Freudenthal", "suffix": "" }, { "first": "M", "middle": [], "last": "Julian", "suffix": "" }, { "first": "Gary", "middle": [], "last": "Pine", "suffix": "" }, { "first": "Fernand", "middle": [], "last": "Jones", "suffix": "" }, { "first": "", "middle": [], "last": "Gobet", "suffix": "" } ], "year": 2015, "venue": "Cognition", "volume": "143", "issue": "", "pages": "61--76", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Freudenthal, Julian M Pine, Gary Jones, and Fernand Gobet. 2015. 
Simulating the cross- linguistic pattern of optional infinitive errors in chil- dren's declaratives and wh-questions. Cognition, 143:61-76.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Under the hood: Using diagnostic classifiers to investigate and improve how language models track agreement information", "authors": [ { "first": "Mario", "middle": [], "last": "Giulianelli", "suffix": "" }, { "first": "Jack", "middle": [], "last": "Harding", "suffix": "" }, { "first": "Florian", "middle": [], "last": "Mohnert", "suffix": "" }, { "first": "Dieuwke", "middle": [], "last": "Hupkes", "suffix": "" }, { "first": "Willem", "middle": [], "last": "Zuidema", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mario Giulianelli, Jack Harding, Florian Mohnert, Dieuwke Hupkes, and Willem Zuidema. 2018. Un- der the hood: Using diagnostic classifiers to in- vestigate and improve how language models track agreement information. In Proceedings of the 2018", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "240--248", "other_ids": {}, "num": null, "urls": [], "raw_text": "EMNLP Workshop BlackboxNLP: Analyzing and In- terpreting Neural Networks for NLP, pages 240- 248.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Constructions: A construction grammar approach to argument structure", "authors": [ { "first": "Adele", "middle": [ "E" ], "last": "Goldberg", "suffix": "" } ], "year": 1995, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adele E Goldberg. 1995. Constructions: A construc- tion grammar approach to argument structure. Uni- versity of Chicago Press.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Constructions at work: The nature of generalization in language", "authors": [ { "first": "Adele", "middle": [ "E" ], "last": "Goldberg", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adele E Goldberg. 2006. Constructions at work: The nature of generalization in language. Oxford Uni- versity Press on Demand.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Explain me this: Creativity, competition, and the partial productivity of constructions", "authors": [ { "first": "Adele", "middle": [ "E" ], "last": "Goldberg", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adele E Goldberg. 2019. Explain me this: Creativity, competition, and the partial productivity of construc- tions. 
Princeton University Press.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Colorless green recurrent networks dream hierarchically", "authors": [ { "first": "Kristina", "middle": [], "last": "Gulordava", "suffix": "" }, { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "\u00c9douard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Tal", "middle": [], "last": "Linzen", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1195--1205", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kristina Gulordava, Piotr Bojanowski,\u00c9douard Grave, Tal Linzen, and Marco Baroni. 2018. Colorless green recurrent networks dream hierarchically. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1195-1205.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Distributional structure. Word", "authors": [ { "first": "S", "middle": [], "last": "Zellig", "suffix": "" }, { "first": "", "middle": [], "last": "Harris", "suffix": "" } ], "year": 1954, "venue": "", "volume": "10", "issue": "", "pages": "146--162", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zellig S Harris. 1954. Distributional structure. Word, 10(2-3):146-162.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Meaningful differences in the everyday experience of young American children", "authors": [ { "first": "Betty", "middle": [], "last": "Hart", "suffix": "" }, { "first": "", "middle": [], "last": "Todd R Risley", "suffix": "" } ], "year": 1995, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Betty Hart and Todd R Risley. 1995. Meaningful differ- ences in the everyday experience of young American children. Paul H Brookes Publishing.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "The faculty of language: what is it, who has it, and how did it evolve? science", "authors": [ { "first": "D", "middle": [], "last": "Marc", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Hauser", "suffix": "" }, { "first": "W Tecumseh", "middle": [], "last": "Chomsky", "suffix": "" }, { "first": "", "middle": [], "last": "Fitch", "suffix": "" } ], "year": 2002, "venue": "", "volume": "298", "issue": "", "pages": "1569--1579", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marc D Hauser, Noam Chomsky, and W Tecumseh Fitch. 2002. The faculty of language: what is it, who has it, and how did it evolve? science, 298(5598):1569-1579.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "A structural probe for finding syntax in word representations", "authors": [ { "first": "John", "middle": [], "last": "Hewitt", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4129--4138", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Hewitt and Christopher D Manning. 2019. 
A structural probe for finding syntax in word represen- tations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4129-4138.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Long short-term memory", "authors": [ { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1997, "venue": "Neural computation", "volume": "9", "issue": "8", "pages": "1735--1780", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Constructions in the parallel architecture", "authors": [ { "first": "Thomas", "middle": [], "last": "Hoffmann", "suffix": "" }, { "first": "Graeme", "middle": [], "last": "Trousdale", "suffix": "" }, { "first": "Ray", "middle": [], "last": "Jackendoff", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Hoffmann, Graeme Trousdale, and Ray Jack- endoff. 2013. Constructions in the parallel architec- ture.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "A systematic assessment of syntactic generalization in neural language models", "authors": [ { "first": "Jennifer", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Jon", "middle": [], "last": "Gauthier", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Qian", "suffix": "" }, { "first": "Ethan", "middle": [], "last": "Wilcox", "suffix": "" }, { "first": "Roger P", "middle": [], "last": "Levy", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2005.03692" ] }, "num": null, "urls": [], "raw_text": "Jennifer Hu, Jon Gauthier, Peng Qian, Ethan Wilcox, and Roger P Levy. 2020. A systematic assessment of syntactic generalization in neural language mod- els. arXiv preprint arXiv:2005.03692.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "What does bert learn about the structure of language", "authors": [ { "first": "Ganesh", "middle": [], "last": "Jawahar", "suffix": "" }, { "first": "Beno\u00eet", "middle": [], "last": "Sagot", "suffix": "" }, { "first": "Djam\u00e9", "middle": [], "last": "Seddah", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "3651--3657", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ganesh Jawahar, Beno\u00eet Sagot, and Djam\u00e9 Seddah. 2019. What does bert learn about the structure of language? In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguistics, pages 3651-3657.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Grammatical constructions and linguistic generalizations: the what's x doing y? construction. Language", "authors": [ { "first": "Paul", "middle": [], "last": "Kay", "suffix": "" }, { "first": "J", "middle": [], "last": "Charles", "suffix": "" }, { "first": "", "middle": [], "last": "Fillmore", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "1--33", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paul Kay and Charles J Fillmore. 1999. 
Grammati- cal constructions and linguistic generalizations: the what's x doing y? construction. Language, pages 1-33.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "LSTMs can learn syntax-sensitive dependencies well, but modeling structure makes them better", "authors": [ { "first": "Adhiguna", "middle": [], "last": "Kuncoro", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "John", "middle": [], "last": "Hale", "suffix": "" }, { "first": "Dani", "middle": [], "last": "Yogatama", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Phil", "middle": [], "last": "Blunsom", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1426--1436", "other_ids": { "DOI": [ "10.18653/v1/P18-1132" ] }, "num": null, "urls": [], "raw_text": "Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yo- gatama, Stephen Clark, and Phil Blunsom. 2018. LSTMs can learn syntax-sensitive dependencies well, but modeling structure makes them better. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 1426-1436, Melbourne, Aus- tralia. Association for Computational Linguistics.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "The emergence of number and syntax units in lstm language models", "authors": [ { "first": "Yair", "middle": [], "last": "Lakretz", "suffix": "" }, { "first": "Cognitive", "middle": [ "Neuroimaging" ], "last": "Unit", "suffix": "" }, { "first": "German", "middle": [], "last": "Kruszewski", "suffix": "" }, { "first": "Theo", "middle": [], "last": "Desbordes", "suffix": "" }, { "first": "Dieuwke", "middle": [], "last": "Hupkes", "suffix": "" }, { "first": "Stanislas", "middle": [], "last": "Dehaene", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" } ], "year": 2019, "venue": "Proceedings of NAACL-HLT", "volume": "", "issue": "", "pages": "11--20", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yair Lakretz, Cognitive Neuroimaging Unit, German Kruszewski, Theo Desbordes, Dieuwke Hupkes, Stanislas Dehaene, and Marco Baroni. 2019. The emergence of number and syntax units in lstm lan- guage models. In Proceedings of NAACL-HLT, pages 11-20.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Distributional semantics in linguistic and cognitive research", "authors": [ { "first": "Alessandro", "middle": [], "last": "Lenci", "suffix": "" } ], "year": 2008, "venue": "Italian journal of linguistics", "volume": "20", "issue": "1", "pages": "1--31", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alessandro Lenci. 2008. Distributional semantics in linguistic and cognitive research. Italian journal of linguistics, 20(1):1-31.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Distributional models of word meaning", "authors": [ { "first": "Alessandro", "middle": [], "last": "Lenci", "suffix": "" } ], "year": 2018, "venue": "Annual review of Linguistics", "volume": "4", "issue": "", "pages": "151--171", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alessandro Lenci. 2018. Distributional models of word meaning. 
Annual review of Linguistics, 4:151-171.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Representations of syntax", "authors": [ { "first": "Tal", "middle": [], "last": "Michael A Lepori", "suffix": "" }, { "first": "R Thomas", "middle": [], "last": "Linzen", "suffix": "" }, { "first": "", "middle": [], "last": "Mccoy", "suffix": "" } ], "year": 2020, "venue": "Effects of constituency and dependency structure in recursive lstms", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2005.00019" ] }, "num": null, "urls": [], "raw_text": "Michael A Lepori, Tal Linzen, and R Thomas McCoy. 2020. Representations of syntax [mask] useful: Ef- fects of constituency and dependency structure in re- cursive lstms. arXiv preprint arXiv:2005.00019.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Learnability and the statistical structure of language: Poverty of stimulus arguments revisited", "authors": [ { "first": "D", "middle": [], "last": "John", "suffix": "" }, { "first": "Jeffrey L", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "", "middle": [], "last": "Elman", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the 26th annual Boston University conference on language development", "volume": "1", "issue": "", "pages": "359--370", "other_ids": {}, "num": null, "urls": [], "raw_text": "John D Lewis and Jeffrey L Elman. 2001. Learnability and the statistical structure of language: Poverty of stimulus arguments revisited. In Proceedings of the 26th annual Boston University conference on lan- guage development, volume 1, pages 359-370. Cite- seer.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Learning of hierarchical serial patterns emerges in infancy", "authors": [ { "first": "J", "middle": [], "last": "David", "suffix": "" }, { "first": "", "middle": [], "last": "Lewkowicz", "suffix": "" }, { "first": "A", "middle": [], "last": "Mark", "suffix": "" }, { "first": "Diane Mj", "middle": [], "last": "Schmuckler", "suffix": "" }, { "first": "", "middle": [], "last": "Mangalindan", "suffix": "" } ], "year": 2018, "venue": "Developmental psychobiology", "volume": "60", "issue": "3", "pages": "243--255", "other_ids": {}, "num": null, "urls": [], "raw_text": "David J Lewkowicz, Mark A Schmuckler, and Di- ane MJ Mangalindan. 2018. Learning of hierarchi- cal serial patterns emerges in infancy. Developmen- tal psychobiology, 60(3):243-255.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Open sesame: Getting inside bert's linguistic knowledge", "authors": [ { "first": "Yongjie", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Chern Tan", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Frank", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", "volume": "", "issue": "", "pages": "241--253", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yongjie Lin, Yi Chern Tan, and Robert Frank. 2019. Open sesame: Getting inside bert's linguistic knowl- edge. 
In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 241-253.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Syntactic structure from deep learning", "authors": [ { "first": "Tal", "middle": [], "last": "Linzen", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2004.10827" ] }, "num": null, "urls": [], "raw_text": "Tal Linzen and Marco Baroni. 2020. Syntactic structure from deep learning. arXiv preprint arXiv:2004.10827.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Assessing the ability of lstms to learn syntaxsensitive dependencies", "authors": [ { "first": "Tal", "middle": [], "last": "Linzen", "suffix": "" }, { "first": "Emmanuel", "middle": [], "last": "Dupoux", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2016, "venue": "Transactions of the Association for Computational Linguistics", "volume": "4", "issue": "", "pages": "521--535", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of lstms to learn syntax- sensitive dependencies. Transactions of the Associa- tion for Computational Linguistics, 4:521-535.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Opensub-titles2016: Extracting large parallel corpora from movie and tv subtitles", "authors": [ { "first": "Pierre", "middle": [], "last": "Lison", "suffix": "" }, { "first": "J\u00f6rg", "middle": [], "last": "Tiedemann", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pierre Lison and J\u00f6rg Tiedemann. 2016. Opensub- titles2016: Extracting large parallel corpora from movie and tv subtitles.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Universal grammar: The strong continuity hypothesis in first language acquisition. Handbook of Child Language Acquisition", "authors": [ { "first": "Barbara", "middle": [ "Lust" ], "last": "", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Barbara Lust. 1999. Universal grammar: The strong continuity hypothesis in first language acquisition. Handbook of Child Language Acquisition.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "The CHILDES Project: Tools for analyzing talk. Third Edition", "authors": [ { "first": "Brian", "middle": [], "last": "Macwhinney", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brian MacWhinney. 2000. The CHILDES Project: Tools for analyzing talk. Third Edition. Lawrence Erlbaum Associates.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "Targeted syntactic evaluation of language models", "authors": [ { "first": "Rebecca", "middle": [], "last": "Marvin", "suffix": "" }, { "first": "Tal", "middle": [], "last": "Linzen", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1192--1202", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rebecca Marvin and Tal Linzen. 2018. Targeted syn- tactic evaluation of language models. 
In Proceed- ings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1192-1202.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "Children's production of unfamiliar word sequences is predicted by positional variability and latent classes in a large sample of child-directed speech", "authors": [ { "first": "Danielle", "middle": [], "last": "Matthews", "suffix": "" }, { "first": "Colin", "middle": [], "last": "Bannard", "suffix": "" } ], "year": 2010, "venue": "Cognitive science", "volume": "34", "issue": "3", "pages": "465--488", "other_ids": {}, "num": null, "urls": [], "raw_text": "Danielle Matthews and Colin Bannard. 2010. Chil- dren's production of unfamiliar word sequences is predicted by positional variability and latent classes in a large sample of child-directed speech. Cognitive science, 34(3):465-488.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "Language learning as language use: A cross-linguistic model of child language development", "authors": [ { "first": "M", "middle": [], "last": "Stewart", "suffix": "" }, { "first": "Morten", "middle": [ "H" ], "last": "Mccauley", "suffix": "" }, { "first": "", "middle": [], "last": "Christiansen", "suffix": "" } ], "year": 2019, "venue": "Psychological review", "volume": "126", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stewart M McCauley and Morten H Christiansen. 2019. Language learning as language use: A cross-linguistic model of child language develop- ment. Psychological review, 126(1):1.", "links": null }, "BIBREF54": { "ref_id": "b54", "title": "Can connectionist models discover the structure of natural language. Minds, Brains and Computers", "authors": [ { "first": "L", "middle": [], "last": "James", "suffix": "" }, { "first": "", "middle": [], "last": "Mcclelland", "suffix": "" } ], "year": 1992, "venue": "", "volume": "", "issue": "", "pages": "168--189", "other_ids": {}, "num": null, "urls": [], "raw_text": "James L McClelland. 1992. Can connectionist models discover the structure of natural language. Minds, Brains and Computers, pages 168-189.", "links": null }, "BIBREF55": { "ref_id": "b55", "title": "Revisiting the poverty of the stimulus: hierarchical generalization without a hierarchical bias in recurrent neural networks", "authors": [ { "first": "Thomas", "middle": [], "last": "Mccoy", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Frank", "suffix": "" }, { "first": "Tal", "middle": [], "last": "Linzen", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 40th Annual Conference of the Cognitive Science Society", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R Thomas McCoy, Robert Frank, and Tal Linzen. 2018. Revisiting the poverty of the stimulus: hierarchical generalization without a hierarchical bias in recur- rent neural networks. In Proceedings of the 40th An- nual Conference of the Cognitive Science Society.", "links": null }, "BIBREF56": { "ref_id": "b56", "title": "Does syntax need to grow on trees? 
sources of hierarchical inductive bias in sequence-to-sequence networks", "authors": [ { "first": "Thomas", "middle": [], "last": "Mccoy", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Frank", "suffix": "" }, { "first": "Tal", "middle": [], "last": "Linzen", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2001.03632" ] }, "num": null, "urls": [], "raw_text": "R Thomas McCoy, Robert Frank, and Tal Linzen. 2020. Does syntax need to grow on trees? sources of hier- archical inductive bias in sequence-to-sequence net- works. arXiv preprint arXiv:2001.03632.", "links": null }, "BIBREF57": { "ref_id": "b57", "title": "Universal dependencies v2: An evergrowing multilingual treebank collection", "authors": [ { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "Marie-Catherine", "middle": [], "last": "De Marneffe", "suffix": "" }, { "first": "Filip", "middle": [], "last": "Ginter", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Hajic", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "Sampo", "middle": [], "last": "Manning", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Pyysalo", "suffix": "" }, { "first": "Francis", "middle": [], "last": "Schuster", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Tyers", "suffix": "" }, { "first": "", "middle": [], "last": "Zeman", "suffix": "" } ], "year": 2020, "venue": "Proceedings of The 12th Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "4034--4043", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joakim Nivre, Marie-Catherine de Marneffe, Filip Gin- ter, Jan Hajic, Christopher D Manning, Sampo Pyysalo, Sebastian Schuster, Francis Tyers, and Daniel Zeman. 2020. Universal dependencies v2: An evergrowing multilingual treebank collection. In Proceedings of The 12th Language Resources and Evaluation Conference, pages 4034-4043.", "links": null }, "BIBREF58": { "ref_id": "b58", "title": "Bayesian Optimization: Open source constrained global optimization tool for Python", "authors": [ { "first": "Fernando", "middle": [], "last": "Nogueira", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fernando Nogueira. 2014-. Bayesian Optimization: Open source constrained global optimization tool for Python.", "links": null }, "BIBREF59": { "ref_id": "b59", "title": "Beyond the constituent-a dependency grammar analysis of chains", "authors": [ { "first": "Timothy", "middle": [], "last": "Osborne", "suffix": "" } ], "year": 2006, "venue": "Folia Linguistica", "volume": "39", "issue": "3-4", "pages": "251--297", "other_ids": {}, "num": null, "urls": [], "raw_text": "Timothy Osborne. 2006. Beyond the constituent-a de- pendency grammar analysis of chains. Folia Lin- guistica, 39(3-4):251-297.", "links": null }, "BIBREF60": { "ref_id": "b60", "title": "Constructions are catenae: Construction grammar meets dependency grammar", "authors": [ { "first": "Timothy", "middle": [], "last": "Osborne", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Gro\u00df", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Timothy Osborne and Thomas Gro\u00df. 2012. 
Construc- tions are catenae: Construction grammar meets de- pendency grammar.", "links": null }, "BIBREF61": { "ref_id": "b61", "title": "Catenae: Introducing a novel unit of syntactic analysis", "authors": [ { "first": "Timothy", "middle": [], "last": "Osborne", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Putnam", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Gro\u00df", "suffix": "" } ], "year": 2012, "venue": "Syntax", "volume": "15", "issue": "4", "pages": "354--396", "other_ids": {}, "num": null, "urls": [], "raw_text": "Timothy Osborne, Michael Putnam, and Thomas Gro\u00df. 2012. Catenae: Introducing a novel unit of syntactic analysis. Syntax, 15(4):354-396.", "links": null }, "BIBREF62": { "ref_id": "b62", "title": "Tests for constituents: What they really reveal about the nature of syntactic structure", "authors": [ { "first": "J", "middle": [], "last": "Timothy", "suffix": "" }, { "first": "", "middle": [], "last": "Osborne", "suffix": "" } ], "year": 2018, "venue": "Language Under Discussion", "volume": "5", "issue": "1", "pages": "1--41", "other_ids": {}, "num": null, "urls": [], "raw_text": "Timothy J Osborne. 2018. Tests for constituents: What they really reveal about the nature of syntactic struc- ture. Language Under Discussion, 5(1):1-41.", "links": null }, "BIBREF63": { "ref_id": "b63", "title": "Distributional semantics meets construction grammar. towards a unified usage-based model of grammar and meaning", "authors": [ { "first": "Giulia", "middle": [], "last": "Rambelli", "suffix": "" }, { "first": "Emmanuele", "middle": [], "last": "Chersoni", "suffix": "" }, { "first": "Philippe", "middle": [], "last": "Blache", "suffix": "" }, { "first": "Chu-Ren", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Lenci", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the First International Workshop on Designing Meaning Representations", "volume": "", "issue": "", "pages": "110--120", "other_ids": {}, "num": null, "urls": [], "raw_text": "Giulia Rambelli, Emmanuele Chersoni, Philippe Blache, Chu-Ren Huang, and Alessandro Lenci. 2019. Distributional semantics meets construction grammar. towards a unified usage-based model of grammar and meaning. In Proceedings of the First International Workshop on Designing Meaning Rep- resentations, pages 110-120.", "links": null }, "BIBREF64": { "ref_id": "b64", "title": "Can lstm learn to capture agreement? the case of basque", "authors": [ { "first": "Shauli", "middle": [], "last": "Ravfogel", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "Francis", "middle": [], "last": "Tyers", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", "volume": "", "issue": "", "pages": "98--107", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shauli Ravfogel, Yoav Goldberg, and Francis Tyers. 2018. Can lstm learn to capture agreement? the case of basque. 
In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 98-107.", "links": null }, "BIBREF65": { "ref_id": "b65", "title": "Ordered neurons: Integrating tree structures into recurrent neural networks", "authors": [ { "first": "Yikang", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Shawn", "middle": [], "last": "Tan", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Sordoni", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Courville", "suffix": "" } ], "year": 2018, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yikang Shen, Shawn Tan, Alessandro Sordoni, and Aaron Courville. 2018. Ordered neurons: Integrat- ing tree structures into recurrent neural networks. In International Conference on Learning Representa- tions.", "links": null }, "BIBREF66": { "ref_id": "b66", "title": "Unsupervised learning of natural languages", "authors": [ { "first": "Zach", "middle": [], "last": "Solan", "suffix": "" }, { "first": "David", "middle": [], "last": "Horn", "suffix": "" }, { "first": "Eytan", "middle": [], "last": "Ruppin", "suffix": "" }, { "first": "Shimon", "middle": [], "last": "Edelman", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the National Academy of Sciences", "volume": "102", "issue": "33", "pages": "11629--11634", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zach Solan, David Horn, Eytan Ruppin, and Shimon Edelman. 2005. Unsupervised learning of natural languages. Proceedings of the National Academy of Sciences, 102(33):11629-11634.", "links": null }, "BIBREF67": { "ref_id": "b67", "title": "Tokenizing, pos tagging, lemmatizing and parsing ud 2.0 with udpipe", "authors": [ { "first": "Milan", "middle": [], "last": "Straka", "suffix": "" }, { "first": "Jana", "middle": [], "last": "Strakov\u00e1", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies", "volume": "", "issue": "", "pages": "88--99", "other_ids": {}, "num": null, "urls": [], "raw_text": "Milan Straka and Jana Strakov\u00e1. 2017. Tokenizing, pos tagging, lemmatizing and parsing ud 2.0 with udpipe. In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Univer- sal Dependencies, pages 88-99, Vancouver, Canada. Association for Computational Linguistics.", "links": null }, "BIBREF68": { "ref_id": "b68", "title": "What do you learn from context? 
probing for sentence structure in contextualized word representations", "authors": [ { "first": "Ian", "middle": [], "last": "Tenney", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Berlin", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Poliak", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Mccoy", "suffix": "" }, { "first": "Najoung", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Bowman", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" } ], "year": 2019, "venue": "7th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R Thomas McCoy, Najoung Kim, Benjamin Van Durme, Samuel Bowman, Dipanjan Das, et al. 2019. What do you learn from con- text? probing for sentence structure in contextual- ized word representations. In 7th International Con- ference on Learning Representations, ICLR 2019.", "links": null }, "BIBREF69": { "ref_id": "b69", "title": "Constructing a language: A usage-based theory of language acquisition", "authors": [ { "first": "Michael", "middle": [], "last": "Tomasello", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Tomasello. 2003. Constructing a language: A usage-based theory of language acquisition. Har- vard University Press.", "links": null }, "BIBREF70": { "ref_id": "b70", "title": "The importance of being recurrent for modeling hierarchical structure", "authors": [ { "first": "Arianna", "middle": [], "last": "Ke M Tran", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Bisazza", "suffix": "" }, { "first": "", "middle": [], "last": "Monz", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "4731--4736", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ke M Tran, Arianna Bisazza, and Christof Monz. 2018. The importance of being recurrent for modeling hier- archical structure. In Proceedings of the 2018 Con- ference on Empirical Methods in Natural Language Processing, pages 4731-4736.", "links": null }, "BIBREF71": { "ref_id": "b71", "title": "Blimp: A benchmark of linguistic minimal pairs for english", "authors": [ { "first": "Alex", "middle": [], "last": "Warstadt", "suffix": "" }, { "first": "Alicia", "middle": [], "last": "Parrish", "suffix": "" }, { "first": "Haokun", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Anhad", "middle": [], "last": "Mohananey", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Sheng-Fu", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Samuel R", "middle": [], "last": "Bowman", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1912.00582" ] }, "num": null, "urls": [], "raw_text": "Alex Warstadt, Alicia Parrish, Haokun Liu, Anhad Mo- hananey, Wei Peng, Sheng-Fu Wang, and Samuel R Bowman. 2019. Blimp: A benchmark of lin- guistic minimal pairs for english. 
arXiv preprint arXiv:1912.00582.", "links": null }, "BIBREF72": { "ref_id": "b72", "title": "What do rnn language models learn about filler-gap dependencies?", "authors": [ { "first": "Ethan", "middle": [], "last": "Wilcox", "suffix": "" }, { "first": "Roger", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Takashi", "middle": [], "last": "Morita", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Futrell", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", "volume": "", "issue": "", "pages": "211--221", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ethan Wilcox, Roger Levy, Takashi Morita, and Richard Futrell. 2018. What do rnn language mod- els learn about filler-gap dependencies? In Proceed- ings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 211-221.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "StringsA, AB, ABC, ... B, BC, ...E Catenae A, B, C, D, E, AB, ABCE, ABDE, ABCDE, ABE, BCE,BDE, BE, CE, DE, CDE Constituents A, ABCDE, C, D, CDE", "type_str": "figure", "uris": null, "num": null }, "FIGREF1": { "text": "The dependency representation of the sentence Mary had a little lamb, annotated with morphosyntactic and syntactic information.", "type_str": "figure", "uris": null, "num": null }, "FIGREF2": { "text": "Correlation values (Spearman \u03c1) over top 10K catenae for each corpus (OpenSubtitles in green on the left of the plot, CHILDES in red in the top right and Simple Wikipedia in yellow at the bottom) compared to the respective babbling (at intermediate stages of learning) and the best models (BM). The thickness of the connections is inversely proportional to correlation.", "type_str": "figure", "uris": null, "num": null }, "FIGREF3": { "text": "Distribution of average cosine similarities for the three groups of cat 2 , showing low, intermediate and high average shifts respectively.", "type_str": "figure", "uris": null, "num": null }, "TABREF0": { "num": null, "text": "Possible structures that can be extracted from the dependency tree inFigure 2", "html": null, "type_str": "table", "content": "" }, "TABREF4": { "num": null, "text": "Examples of catenae extracted from CHILDES. Largest and smallest mutual information are reported, in top and bottom tier of the table respectively. Part of Speech are prefixed by \" \" and syntactic relations are prefixed by \"@\"", "html": null, "type_str": "table", "content": "
" }, "TABREF6": { "num": null, "text": "Pairs of catenae (cat 1 , cat 2 ), their cosine similarity in the space obtained from CHILDES, in the space obtained from the best model (BM) and in all the intermediate models. The last column shows the difference between cosine similarity at epoch 5 and cosine similarity at epoch 35.", "html": null, "type_str": "table", "content": "
cat1                 | shift | cosine | cat2            | shift | cosine
@nsubj @root so      | 0.18  | 0.43   | more @root      | 0.2   | 0.21
@nsubj only @root    | 0.18  | 0.41   | AUX know @obj   | 0.19  | 0.66
what @root @obj      | 0.18  | 0.39   | @advmod tell    | 0.17  | 0.64
what @advmod VERB    | 0.16  | 0.19   | @aux know @obj  | 0.16  | 0.71
only @root VERB      | 0.16  | 0.38   | @advmod can     | 0.15  | 0.76
more @root           | 0.16  | 0.23   | know @obj       | 0.15  | 0.62
@root it @xcomp      | 0.15  | 0.61   | a NOUN          | 0.13  | 0.52
@det minute          | 0.15  | 0.25   | might @root     | 0.13  | 0.70
PRON only @root      | 0.15  | 0.53   | PRON @root n't  | 0.12  | 0.53
VERB DET minute      | 0.15  | 0.33   | @root that VERB | 0.12  | 0.65
PRON @root so @ccomp | 0.14  | 0.54   | VERB 'll        | 0.12  | 0.71
DET minute           | 0.134 | 0.33   | VERB me @obl    | 0.12  | 0.76
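As a minimal illustration of the shift column above — assuming, per the caption, that the shift for a catena pair is the difference between its cosine similarity at epoch 5 and at epoch 35 — the sketch below uses dummy vectors; all names and values are purely illustrative, not the paper's actual data.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Dummy distributional vectors for a catena pair at epoch 5 and epoch 35
# (illustrative values only; real vectors would come from the babbling spaces).
cat1_ep5, cat2_ep5 = np.array([0.2, 0.7, 0.1]), np.array([0.3, 0.6, 0.2])
cat1_ep35, cat2_ep35 = np.array([0.1, 0.9, 0.4]), np.array([0.5, 0.2, 0.8])

# Shift = cosine similarity at epoch 5 minus cosine similarity at epoch 35,
# as defined in the table caption.
shift = cosine(cat1_ep5, cat2_ep5) - cosine(cat1_ep35, cat2_ep35)
print(f"shift = {shift:.3f}")
```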
" }, "TABREF7": { "num": null, "text": "Catenae with highest average shifts.", "html": null, "type_str": "table", "content": "
         | negative | none     | positive
negative | -        | 6.83e-06 | 4.57e-05
none     | 0.000    | -        | 4.15e-29
positive | 0.000    | 4.15e-29 | -
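The paper does not specify how this pairwise comparison was computed; the sketch below shows one way such a matrix could be produced, under the assumption of a Kruskal-Wallis omnibus test followed by Dunn's posthoc test via the scikit-posthocs package. The data here are random placeholders standing in for the three shift groups (negative, none, positive) named in the caption.

```python
import numpy as np
import pandas as pd
import scikit_posthocs as sp  # pip install scikit-posthocs (assumed library)
from scipy import stats

# Placeholder values for catenae split into three groups by average shift:
# negative (< -0.05), none (-0.05..0.05), positive (> 0.05), per the caption.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "value": np.concatenate([rng.normal(-0.10, 0.02, 50),
                             rng.normal(0.00, 0.02, 50),
                             rng.normal(0.10, 0.02, 50)]),
    "group": ["negative"] * 50 + ["none"] * 50 + ["positive"] * 50,
})

# Omnibus Kruskal-Wallis test across the three groups...
h_stat, p_omnibus = stats.kruskal(
    *(g["value"].values for _, g in df.groupby("group"))
)

# ...followed by Dunn's pairwise posthoc test, which yields a p-value matrix
# analogous to the one reported above.
p_matrix = sp.posthoc_dunn(df, val_col="value", group_col="group")
print(p_omnibus)
print(p_matrix)
```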
" }, "TABREF8": { "num": null, "text": "Dunn posthoc test on the three groups of c 2 , showing low (< \u22120.05), intermediate (\u22120.05 < x < 0.05) and high (< 0.05) average shifts respectively.", "html": null, "type_str": "table", "content": "" } } } }