{ "paper_id": "P14-1027", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:03:48.793208Z" }, "title": "Modelling function words improves unsupervised word segmentation", "authors": [ { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "", "affiliation": { "laboratory": "", "institution": "Macquarie University", "location": { "settlement": "Sydney", "country": "Australia" } }, "email": "" }, { "first": "Anne", "middle": [], "last": "Christophe", "suffix": "", "affiliation": { "laboratory": "", "institution": "Ecole Normale Sup\u00e9rieure", "location": { "settlement": "Paris", "country": "France" } }, "email": "" }, { "first": "Katherine", "middle": [], "last": "Demuth", "suffix": "", "affiliation": { "laboratory": "", "institution": "Santa Fe Institute", "location": { "settlement": "Santa Fe", "region": "New Mexico", "country": "USA" } }, "email": "" }, { "first": "Emmanuel", "middle": [], "last": "Dupoux", "suffix": "", "affiliation": { "laboratory": "", "institution": "Ecole Normale Sup\u00e9rieure", "location": { "settlement": "Paris", "country": "France" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Inspired by experimental psychological findings suggesting that function words play a special role in word learning, we make a simple modification to an Adaptor Grammar based Bayesian word segmentation model to allow it to learn sequences of monosyllabic \"function words\" at the beginnings and endings of collocations of (possibly multi-syllabic) words. This modification improves unsupervised word segmentation on the standard Bernstein-Ratner (1987) corpus of child-directed English by more than 4% token f-score compared to a model identical except that it does not special-case \"function words\", setting a new state-of-the-art of 92.4% token f-score. Our function word model assumes that function words appear at the left periphery, and while this is true of languages such as English, it is not true universally. We show that a learner can use Bayesian model selection to determine the location of function words in their language, even though the input to the model only consists of unsegmented sequences of phones. Thus our computational models support the hypothesis that function words play a special role in word learning.", "pdf_parse": { "paper_id": "P14-1027", "_pdf_hash": "", "abstract": [ { "text": "Inspired by experimental psychological findings suggesting that function words play a special role in word learning, we make a simple modification to an Adaptor Grammar based Bayesian word segmentation model to allow it to learn sequences of monosyllabic \"function words\" at the beginnings and endings of collocations of (possibly multi-syllabic) words. This modification improves unsupervised word segmentation on the standard Bernstein-Ratner (1987) corpus of child-directed English by more than 4% token f-score compared to a model identical except that it does not special-case \"function words\", setting a new state-of-the-art of 92.4% token f-score. Our function word model assumes that function words appear at the left periphery, and while this is true of languages such as English, it is not true universally. We show that a learner can use Bayesian model selection to determine the location of function words in their language, even though the input to the model only consists of unsegmented sequences of phones. 
Thus our computational models support the hypothesis that function words play a special role in word learning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Over the past two decades psychologists have investigated the role that function words might play in human language acquisition. Their experiments suggest that function words play a special role in the acquisition process: children learn function words before they learn the vast bulk of the associated content words, and they use function words to help identify context words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The goal of this paper is to determine whether computational models of human language acquisition can provide support for the hypothesis that function words are treated specially in human language acquisition. We do this by comparing two computational models of word segmentation which differ solely in the way that they model function words. Following Elman et al. (1996) and Brent (1999) our word segmentation models identify word boundaries from unsegmented sequences of phonemes corresponding to utterances, effectively performing unsupervised learning of a lexicon. For example, given input consisting of unsegmented utterances such as the following: We show that a model equipped with the ability to learn some rudimentary properties of the target language's function words is able to learn the vocabulary of that language more accurately than a model that is identical except that it is incapable of learning these generalisations about function words. This suggests that there are acquisition advantages to treating function words specially that human learners could take advantage of (at least to the extent that they are learning similar generalisations as our models), and thus supports the hypothesis that function words are treated specially in human lexical acquisition. As a reviewer points out, we present no evidence that children use function words in the way that our model does, and we want to emphasise we make no such claim. While absolute accuracy is not directly relevant to the main point of the paper, we note that the models that learn generalisations about function words perform unsupervised word segmentation at 92.5% token f-score on the standard Bernstein-Ratner (1987) corpus, which improves the previous state-of-the-art by more than 4%.", "cite_spans": [ { "start": 353, "end": 372, "text": "Elman et al. (1996)", "ref_id": "BIBREF6" }, { "start": 377, "end": 389, "text": "Brent (1999)", "ref_id": "BIBREF1" }, { "start": 1678, "end": 1701, "text": "Bernstein-Ratner (1987)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "As a reviewer points out, the changes we make to our models to incorporate function words can be viewed as \"building in\" substantive information about possible human languages. The model that achieves the best token f-score expects function words to appear at the left edge of phrases. While this is true for languages such as English, it is not true universally. 
By comparing the posterior probability of two models -one in which function words appear at the left edges of phrases, and another in which function words appear at the right edges of phrases -we show that a learner could use Bayesian posterior probabilities to determine that function words appear at the left edges of phrases in English, even though they are not told the locations of word boundaries or which words are function words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper is structured as follows. Section 2 describes the specific word segmentation models studied in this paper, and the way we extended them to capture certain properties of function words. The word segmentation experiments are presented in section 3, and section 4 discusses how a learner could determine whether function words occur on the left-periphery or the rightperiphery in the language they are learning. Section 5 concludes and describes possible future work. The rest of this introduction provides background on function words, the Adaptor Grammar models we use to describe lexical acquisition and the Bayesian inference procedures we use to infer these models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Traditional descriptive linguistics distinguishes function words, such as determiners and prepositions, from content words, such as nouns and verbs, corresponding roughly to the distinction between functional categories and lexical categories of modern generative linguistics (Fromkin, 2001) . Function words differ from content words in at least the following ways:", "cite_spans": [ { "start": 276, "end": 291, "text": "(Fromkin, 2001)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Psychological evidence for the role of function words in word learning", "sec_num": "1.1" }, { "text": "1. there are usually far fewer function word types than content word types in a language 2. function word types typically have much higher token frequency than content word types 3. function words are typically morphologically and phonologically simple (e.g., they are typically monosyllabic) 4. function words typically appear in peripheral positions of phrases (e.g., prepositions typically appear at the beginning of prepositional phrases) 5. each function word class is associated with specific content word classes (e.g., deter-miners and prepositions are associated with nouns, auxiliary verbs and complementisers are associated with main verbs) 6. semantically, content words denote sets of objects or events, while function words denote more complex relationships over the entities denoted by content words 7. historically, the rate of innovation of function words is much lower than the rate of innovation of content words (i.e., function words are typically \"closed class\", while content words are \"open class\") Properties 1-4 suggest that function words might play a special role in language acquisition because they are especially easy to identify, while property 5 suggests that they might be useful for identifying lexical categories. 
The models we study here focus on properties 3 and 4, in that they are capable of learning specific sequences of monosyllabic words in peripheral (i.e., initial or final) positions of phrase-like units.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Psychological evidence for the role of function words in word learning", "sec_num": "1.1" }, { "text": "A number of psychological experiments have shown that infants are sensitive to the function words of their language within their first year of life (Shi et al., 2006; Hall\u00e9 et al., 2008; Shafer et al., 1998) , often before they have experienced the \"word learning spurt\". Crucially for our purpose, infants of this age were shown to exploit frequent function words to segment neighboring content words (Shi and Lepage, 2008; Hall\u00e9 et al., 2008) .", "cite_spans": [ { "start": 148, "end": 166, "text": "(Shi et al., 2006;", "ref_id": "BIBREF27" }, { "start": 167, "end": 186, "text": "Hall\u00e9 et al., 2008;", "ref_id": "BIBREF11" }, { "start": 187, "end": 207, "text": "Shafer et al., 1998)", "ref_id": "BIBREF23" }, { "start": 402, "end": 424, "text": "(Shi and Lepage, 2008;", "ref_id": "BIBREF24" }, { "start": 425, "end": 444, "text": "Hall\u00e9 et al., 2008)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Psychological evidence for the role of function words in word learning", "sec_num": "1.1" }, { "text": "In addition, 14 to 18-month-old children were shown to exploit function words to constrain lexical access to known words -for instance, they expect a noun after a determiner (Cauvet et al., 2014; Kedar et al., 2006; Zangl and Fernald, 2007) . In addition, it is plausible that function words play a crucial role in children's acquisition of more complex syntactic phenomena (Christophe et al., 2008; Demuth and McCullough, 2009) , so it is interesting to investigate the roles they might play in computational models of language acquisition.", "cite_spans": [ { "start": 174, "end": 195, "text": "(Cauvet et al., 2014;", "ref_id": "BIBREF2" }, { "start": 196, "end": 215, "text": "Kedar et al., 2006;", "ref_id": "BIBREF19" }, { "start": 216, "end": 240, "text": "Zangl and Fernald, 2007)", "ref_id": "BIBREF29" }, { "start": 374, "end": 399, "text": "(Christophe et al., 2008;", "ref_id": "BIBREF3" }, { "start": 400, "end": 428, "text": "Demuth and McCullough, 2009)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Psychological evidence for the role of function words in word learning", "sec_num": "1.1" }, { "text": "Adaptor grammars are a framework for Bayesian inference of a certain class of hierarchical nonparametric models (Johnson et al., 2007b) . They define distributions over the trees specified by a context-free grammar, but unlike probabilistic context-free grammars, they \"learn\" distributions over the possible subtrees of a user-specified set of \"adapted\" nonterminals. (Adaptor grammars are non-parametric, i.e., not characterisable by a finite set of parameters, if the set of possible subtrees of the adapted nonterminals is infinite). Adaptor grammars are useful when the goal is to learn a potentially unbounded set of entities that need to satisfy hierarchical constraints. 
As section 2 explains in more detail, word segmentation is such a case: words are composed of syllables and belong to phrases or collocations, and modelling this structure improves word segmentation accuracy.", "cite_spans": [ { "start": 112, "end": 135, "text": "(Johnson et al., 2007b)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Adaptor grammars", "sec_num": "1.2" }, { "text": "Adaptor Grammars are formally defined in Johnson et al. (2007b) , which should be consulted for technical details. Adaptor Grammars (AGs) are an extension of Probabilistic Context-Free Grammars (PCFGs), which we describe first. A Context-Free Grammar (CFG) G = (N, W, R, S) consists of disjoint finite sets of nonterminal symbols N and terminal symbols W , a finite set of rules R of the form A \u2192 \u03b1 where A \u2208 N and \u03b1 \u2208 (N \u222a W ) \u22c6 , and a start symbol S \u2208 N . (We assume there are no \"\u03f5-rules\" in R, i.e., we require that |\u03b1| \u2265 1 for each A \u2192 \u03b1 \u2208 R).", "cite_spans": [ { "start": 41, "end": 63, "text": "Johnson et al. (2007b)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Adaptor grammars", "sec_num": "1.2" }, { "text": "A Probabilistic Context-Free Grammar (PCFG) is a quintuple (N, W, R, S, \u03b8) where N , W , R and S are the nonterminals, terminals, rules and start symbol of a CFG respectively, and \u03b8 is a vector of non-negative reals indexed by R that satisfy", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adaptor grammars", "sec_num": "1.2" }, { "text": "\u2211 \u03b1\u2208R A \u03b8 A \u2192 \u03b1 = 1 for each A \u2208 N , where R A = {A \u2192 \u03b1 : A \u2192 \u03b1 \u2208 R} is the set of rules expanding A.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adaptor grammars", "sec_num": "1.2" }, { "text": "Informally, \u03b8 A \u2192 \u03b1 is the probability of a node labelled A expanding to a sequence of nodes labelled \u03b1, and the probability of a tree is the product of the probabilities of the rules used to construct each non-leaf node in it. More precisely, for each X \u2208 N \u222a W a PCFG associates distributions G X over the set of trees T X generated by X as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adaptor grammars", "sec_num": "1.2" }, { "text": "If X \u2208 W (i.e., if X is a terminal) then G X is the distribution that puts probability 1 on the single-node tree labelled X.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adaptor grammars", "sec_num": "1.2" }, { "text": "If X \u2208 N (i.e., if X is a nonterminal) then:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adaptor grammars", "sec_num": "1.2" }, { "text": "G X = \u2211 X \u2192 B 1 ...Bn\u2208R X \u03b8 X \u2192 B 1 ...Bn TD X (G B 1 , . . . , G Bn ) (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adaptor grammars", "sec_num": "1.2" }, { "text": "where R X is the subset of rules in R expanding nonterminal X \u2208 N , and:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adaptor grammars", "sec_num": "1.2" }, { "text": "TD X (G 1 , . . . , G n ) ( . . X . t 1 . t n . . . . ) = n \u220f i=1 G i (t i ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adaptor grammars", "sec_num": "1.2" }, { "text": "That is, TD X (G 1 , . . . 
, G n ) is a distribution over the set of trees T X generated by nonterminal X, where each subtree t i is generated independently from G i . The PCFG generates the distribution G S over the set of trees T S generated by the start symbol S; the distribution over the strings it generates is obtained by marginalising over the trees.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adaptor grammars", "sec_num": "1.2" }, { "text": "In a Bayesian PCFG one puts Dirichlet priors Dir(\u03b1) on the rule probability vector \u03b8, such that there is one Dirichlet parameter \u03b1 A \u2192 \u03b1 for each rule A \u2192 \u03b1 \u2208 R. There are Markov Chain Monte Carlo (MCMC) and Variational Bayes procedures for estimating the posterior distribution over rule probabilities \u03b8 and parse trees given data consisting of terminal strings alone (Kurihara and Sato, 2006; Johnson et al., 2007a) .", "cite_spans": [ { "start": 369, "end": 394, "text": "(Kurihara and Sato, 2006;", "ref_id": "BIBREF20" }, { "start": 395, "end": 417, "text": "Johnson et al., 2007a)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Adaptor grammars", "sec_num": "1.2" }, { "text": "PCFGs can be viewed as recursive mixture models over trees. While PCFGs are expressive enough to describe a range of linguisticallyinteresting phenomena, PCFGs are parametric models, which limits their ability to describe phenomena where the set of basic units, as well as their properties, are the target of learning. Lexical acqusition is an example of a phenomenon that is naturally viewed as non-parametric inference, where the number of lexical entries (i.e., words) as well as their properties must be learnt from the data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adaptor grammars", "sec_num": "1.2" }, { "text": "It turns out there is a straight-forward modification to the PCFG distribution (1) that makes it suitably non-parametric. As Johnson et al. (2007b) explain, by inserting a Dirichlet Process (DP) or Pitman-Yor Process (PYP) into the generative mechanism (1) the model \"concentrates\" mass on a subset of trees (Teh et al., 2006) . Specifically, an Adaptor Grammar identifies a subset A \u2286 N of adapted nonterminals. In an Adaptor Grammar the unadapted nonterminals N \\ A expand via (1), just as in a PCFG, but the distributions of the adapted nonterminals A are \"concentrated\" by passing them through a DP or PYP:", "cite_spans": [ { "start": 125, "end": 147, "text": "Johnson et al. (2007b)", "ref_id": "BIBREF15" }, { "start": 308, "end": 326, "text": "(Teh et al., 2006)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Adaptor grammars", "sec_num": "1.2" }, { "text": "H X = \u2211 X \u2192 B 1 ...Bn\u2208R X \u03b8 X \u2192 B 1 ...Bn TD X (G B 1 , . . . , G Bn ) G X = PYP(H X , a X , b X )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adaptor grammars", "sec_num": "1.2" }, { "text": "Here a X and b X are parameters of the PYP associated with the adapted nonterminal X. As Goldwater et al. (2011) explain, such Pitman-Yor Processes naturally generate power-law distributed data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adaptor grammars", "sec_num": "1.2" }, { "text": "Informally, Adaptor Grammars can be viewed as caching entire subtrees of the adapted nonterminals. 
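To make this caching behaviour concrete, the following minimal sketch (our illustration, not the implementation used here) shows the Pitman-Yor "Chinese restaurant" sampling rule for a single adapted nonterminal; the base distribution, which in a real Adaptor Grammar would sample a subtree using the PCFG rules expanding that nonterminal, is replaced by a toy generator of random phone strings:

```python
import random
from collections import Counter

def base_sample():
    # Toy stand-in for the base distribution H_X: in an Adaptor Grammar this
    # would sample a subtree using the PCFG rules expanding the adapted
    # nonterminal X; here we just generate a short random phone string.
    phones = "aeioubdkmnst"
    return "".join(random.choice(phones) for _ in range(random.randint(2, 5)))

class PitmanYorCache:
    """Chinese-restaurant sampler for one adapted nonterminal: a and b are
    the PYP discount and concentration parameters, and each "table" is
    labelled with a cached item (standing in for a cached subtree)."""

    def __init__(self, a=0.5, b=1.0, base=base_sample):
        self.a, self.b, self.base = a, b, base
        self.counts = []   # number of customers seated at each table
        self.labels = []   # cached item served at each table

    def sample(self):
        n = sum(self.counts)
        # Reuse table k with probability (n_k - a)/(n + b); open a new table
        # (a fresh draw from the base distribution) with the remaining
        # (b + a * number_of_tables)/(n + b) mass: the "rich get richer" rule.
        r = random.uniform(0.0, n + self.b)
        for k, n_k in enumerate(self.counts):
            r -= n_k - self.a
            if r < 0.0:
                self.counts[k] += 1
                return self.labels[k]
        self.counts.append(1)
        self.labels.append(self.base())
        return self.labels[-1]

random.seed(0)
cache = PitmanYorCache()
draws = [cache.sample() for _ in range(1000)]
print(Counter(draws).most_common(5))
```

After a few hundred draws a handful of cached items account for most of the samples; it is this reuse of cached subtrees that lets an adapted nonterminal behave like a learned lexicon.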
Roughly speaking, the probability of generating a particular subtree of an adapted nonterminal is proportional to the number of times that subtree has been generated before. This \"rich get richer\" behaviour causes the distribution of subtrees to follow a power-law (the power is specified by the a X parameter of the PYP). The PCFG rules expanding an adapted nonterminal X define the \"base distribution\" of the associated DP or PYP, and the a X and b X parameters determine how much mass is reserved for \"new\" trees.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adaptor grammars", "sec_num": "1.2" }, { "text": "There are several different procedures for inferring the parse trees and the rule probabilities given a corpus of strings: Johnson et al. (2007b) describe a MCMC sampler and Cohen et al. 2010describe a Variational Bayes procedure. We use the MCMC procedure here since this has been successfully applied to word segmentation problems in previous work (Johnson, 2008) .", "cite_spans": [ { "start": 123, "end": 145, "text": "Johnson et al. (2007b)", "ref_id": "BIBREF15" }, { "start": 350, "end": 365, "text": "(Johnson, 2008)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Adaptor grammars", "sec_num": "1.2" }, { "text": "Perhaps the simplest word segmentation model is the unigram model, where utterances are modeled as sequences of words, and where each word is a sequence of segments (Brent, 1999; . A unigram model can be expressed as an Adaptor Grammar with one adapted nonterminal Word (we indicate adapted nonterminals by underlining them in grammars here; regular expressions are expanded into right-branching productions).", "cite_spans": [ { "start": 165, "end": 178, "text": "(Brent, 1999;", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Word segmentation with Adaptor Grammars", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Sentence \u2192 Word + (2) Word \u2192 Phone +", "eq_num": "(3)" } ], "section": "Word segmentation with Adaptor Grammars", "sec_num": "2" }, { "text": "The first rule (2) says that a sentence consists of one or more Words, while the second rule (3) states that a Word consists of a sequence of one or more Phones; we assume that there are rules expanding Phone into all possible phones. Because Word is an adapted nonterminal, the adaptor grammar memoises Word subtrees, which corresponds to learning the phone sequences for the words of the language. The more sophisticated Adaptor Grammars discussed below can be understood as specialising either the first or the second of the rules in (2-3). The next two subsections review the Adaptor Grammar word segmentation models presented in Johnson (2008) and : section 2.1 reviews how phonotactic syllable-structure constraints can be expressed with Adaptor Grammars, while section 2.2 reviews how phrase-like units called \"collocations\" capture inter-word dependencies. 
Section 2.3 presents the major novel contribution of this paper by explaining how we modify these adaptor grammars to capture some of the special properties of function words.", "cite_spans": [ { "start": 634, "end": 648, "text": "Johnson (2008)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Word segmentation with Adaptor Grammars", "sec_num": "2" }, { "text": "The rule (3) models words as sequences of independently generated phones: this is what called the \"monkey model\" of word generation (it instantiates the metaphor that word types are generated by a monkey randomly banging on the keys of a typewriter). However, the words of a language are typically composed of one or more syllables, and explicitly modelling the internal structure of words typically improves word segmentation considerably. Johnson (2008) suggested replacing (3) with the following model of word structure:", "cite_spans": [ { "start": 441, "end": 455, "text": "Johnson (2008)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Syllable structure and phonotactics", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Word \u2192 Syllable 1:4 (4) Syllable \u2192(Onset) Rhyme (5) Onset \u2192 Consonant + (6) Rhyme \u2192 Nucleus (Coda)", "eq_num": "(7)" } ], "section": "Syllable structure and phonotactics", "sec_num": "2.1" }, { "text": "Nucleus \u2192 Vowel", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syllable structure and phonotactics", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "+ (8) Coda \u2192 Consonant +", "eq_num": "(9)" } ], "section": "Syllable structure and phonotactics", "sec_num": "2.1" }, { "text": "Here and below superscripts indicate iteration (e.g., a Word consists of 1 to 4 Syllables), while an Onset consists of an unbounded number of Consonants), while parentheses indicate optionality (e.g., a Rhyme consists of an obligatory Nucleus followed by an optional Coda). We assume that there are rules expanding Consonant and Vowel to the set of all consonants and vowels respectively (this amounts to assuming that the learner can distinguish consonants from vowels). Because Onset, Nucleus and Coda are adapted, this model learns the possible syllable onsets, nucleii and coda of the language, even though neither syllable structure nor word boundaries are explicitly indicated in the input to the model. The model just described assumes that wordinternal syllables have the same structure as wordperipheral syllables, but in languages such as English word-peripheral onsets and codas can be more complex than the corresponding wordinternal onsets and codas. For example, the word \"string\" begins with the onset cluster str, which is relatively rare word-internally. Johnson (2008) showed that word segmentation accuracy improves if the model can learn different consonant sequences for word-inital onsets and wordfinal codas. 
It is easy to express this as an Adaptor Grammar: (4) is replaced with (10-11) and (12-17) are added to the grammar.", "cite_spans": [ { "start": 1072, "end": 1086, "text": "Johnson (2008)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Syllable structure and phonotactics", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Word \u2192 SyllableIF (10) Word \u2192 SyllableI Syllable 0:2 SyllableF (11) SyllableIF \u2192(OnsetI) RhymeF (12) SyllableI \u2192(OnsetI) Rhyme (13) SyllableF \u2192(Onset) RhymeF (14) OnsetI \u2192 Consonant + (15) RhymeF \u2192 Nucleus (CodaF) (16) CodaF \u2192 Consonant +", "eq_num": "(17)" } ], "section": "Syllable structure and phonotactics", "sec_num": "2.1" }, { "text": "In this grammar the suffix \"I\" indicates a wordinitial element, and \"F\" indicates a word-final element. Note that the model simply has the ability to learn that different clusters can occur wordperipherally and word-internally; it is not given any information about the relative complexity of these clusters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syllable structure and phonotactics", "sec_num": "2.1" }, { "text": "Goldwater et al. 2009point out the detrimental effect that inter-word dependencies can have on word segmentation models that assume that the words of an utterance are independently generated. Informally, a model that generates words independently is likely to incorrectly segment multiword expressions such as \"the doggie\" as single words because the model has no way to capture word-to-word dependencies, e.g., that \"doggie\" is typically preceded by \"the\". Goldwater et al show that word segmentation accuracy improves when the model is extended to capture bigram dependencies. Adaptor grammar models cannot express bigram dependencies, but they can capture similiar inter-word dependencies using phrase-like units that Johnson (2008) calls collocations. Johnson and showed that word segmentation accuracy improves further if the model learns a nested hierarchy of collocations. This can be achieved by replacing (2) with (18-21).", "cite_spans": [ { "start": 721, "end": 735, "text": "Johnson (2008)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Collocation models of inter-word dependencies", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Sentence \u2192 Colloc3 + (18) Colloc3 \u2192 Colloc2 + (19) Colloc2 \u2192 Colloc1 + (20) Colloc1 \u2192 Word +", "eq_num": "(21)" } ], "section": "Collocation models of inter-word dependencies", "sec_num": "2.2" }, { "text": "Informally, Colloc1, Colloc2 and Colloc3 define a nested hierarchy of phrase-like units. While not designed to correspond to syntactic phrases, by examining the sample parses induced by the Adaptor Grammar we noticed that the collocations often correspond to noun phrases, prepositional phrases or verb phrases. 
This motivates the extension to the Adaptor Grammar discussed below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Collocation models of inter-word dependencies", "sec_num": "2.2" }, { "text": "The starting point and baseline for our extension is the adaptor grammar with syllable structure phonotactic constraints and three levels of collocational structure (5-21), as prior work has found that this yields the highest word segmentation token f-score . Our extension assumes that the Colloc1 \u2212 Colloc3 constituents are in fact phrase-like, so we extend the rules (19-21) to permit an optional sequence of monosyllabic words at the left edge of each of these constituents. Our model thus captures two of the properties of function words discussed in section 1.1: they are monosyllabic (and thus phonologically simple), and they appear on the periphery of phrases. (We put \"function words\" in scare quotes below because our model only approximately captures the linguistic properties of function words).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incorporating \"function words\" into collocation models", "sec_num": "2.3" }, { "text": "Specifically, we replace rules (19-21) with the following sequence of rules:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incorporating \"function words\" into collocation models", "sec_num": "2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Colloc3 \u2192(FuncWords3) Colloc2 + (22) Colloc2 \u2192(FuncWords2) Colloc1 + (23) Colloc1 \u2192(FuncWords1) Word + (24) FuncWords3 \u2192 FuncWord3 + (25) FuncWord3 \u2192 SyllableIF (26) FuncWords2 \u2192 FuncWord2 + (27) FuncWord2 \u2192 SyllableIF (28) FuncWords1 \u2192 FuncWord1 + (29) FuncWord1 \u2192 SyllableIF", "eq_num": "(30)" } ], "section": "Incorporating \"function words\" into collocation models", "sec_num": "2.3" }, { "text": "This model memoises (i.e., learns) both the individual \"function words\" and the sequences of \"function words\" that modify the Colloc1 \u2212 Colloc3 constituents. Note also that \"function words\" expand directly to SyllableIF, which in turn expands to a monosyllable with a word-initial onset and word-final coda. This means that \"function words\" are memoised independently of the \"content words\" that Word expands to; i.e., the model learns distinct \"function word\" and \"content word\" vocabularies. Figure 1 depicts a sample parse generated by this grammar.", "cite_spans": [], "ref_spans": [ { "start": 494, "end": 502, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Incorporating \"function words\" into collocation models", "sec_num": "2.3" }, { "text": ". .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incorporating \"function words\" into collocation models", "sec_num": "2.3" }, { "text": ".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence", "sec_num": null }, { "text": ".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Colloc3", "sec_num": null }, { "text": ".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "FuncWords3", "sec_num": null }, { "text": ". you .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "FuncWord3", "sec_num": null }, { "text": ". want .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "FuncWord3", "sec_num": null }, { "text": ". 
to .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "FuncWord3", "sec_num": null }, { "text": ".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Colloc2", "sec_num": null }, { "text": ".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Colloc1", "sec_num": null }, { "text": ". see .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word", "sec_num": null }, { "text": ".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Colloc1", "sec_num": null }, { "text": ".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "FuncWords1", "sec_num": null }, { "text": ". the .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "FuncWord1", "sec_num": null }, { "text": ". book Figure 1 : A sample parse generated by the \"function word\" Adaptor Grammar with rules (10-18) and (22) (23) (24) (25) (26) (27) (28) (29) (30) . To simplify the parse we only show the root node and the adapted nonterminals, and replace word-internal structure by the word's orthographic form.", "cite_spans": [ { "start": 105, "end": 109, "text": "(22)", "ref_id": null }, { "start": 110, "end": 114, "text": "(23)", "ref_id": null }, { "start": 115, "end": 119, "text": "(24)", "ref_id": null }, { "start": 120, "end": 124, "text": "(25)", "ref_id": null }, { "start": 125, "end": 129, "text": "(26)", "ref_id": null }, { "start": 130, "end": 134, "text": "(27)", "ref_id": null }, { "start": 135, "end": 139, "text": "(28)", "ref_id": null }, { "start": 140, "end": 144, "text": "(29)", "ref_id": null }, { "start": 145, "end": 149, "text": "(30)", "ref_id": null } ], "ref_spans": [ { "start": 7, "end": 15, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Word", "sec_num": null }, { "text": "This grammar builds in the fact that function words appear on the left periphery of phrases. This is true of languages such as English, but is not true cross-linguistically. For comparison purposes we also include results for a mirror-image model that permits \"function words\" on the right periphery, a model which permits \"function words\" on both the left and right periphery (achieved by changing rules 22-24), as well as a model that analyses all words as monosyllabic.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word", "sec_num": null }, { "text": "Section 4 explains how a learner could use Bayesian model selection to determine that function words appear on the left periphery in English by comparing the posterior probability of the data under our \"function word\" Adaptor Grammar to that obtained using a grammar which is identical except that rules (22-24) are replaced with the mirror-image rules in which \"function words\" are attached to the right periphery.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word", "sec_num": null }, { "text": "This section presents results of running our Adaptor Grammar models on subsets of the Bernstein-Ratner (1987) corpus of child-directed English. We use the Adaptor Grammar software available from http://web.science.mq.edu.au/\u02dcmjohnson/ with the same settings as described in , i.e., we perform Bayesian inference with \"vague\" priors for all hyperparameters (so there are no adjustable parameters in our models), and perform 8 different MCMC runs of each condition with table-label resampling for 2,000 sweeps of the training data. 
At every 10th sweep of the last 1,000 sweeps we use the model to segment the entire corpus (even if it is only trained on a subset of it), so we collect 800 sample segmentations of each utterance. The most frequent segmentation in these 800 sample segmentations is the one we score in the evaluations below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word segmentation results", "sec_num": "3" }, { "text": "Here we evaluate the word segmentations found by the \"function word\" Adaptor Grammar model described in section 2.3 and compare it to the baseline grammar with collocations and phonotactics from . Figure 2 presents the standard token and lexicon (i.e., type) f-score evaluations for word segmentations proposed by these models (Brent, 1999) , and Table 1 summarises the token and lexicon f-scores for the major models discussed in this paper. It is interesting to note that adding \"function words\" improves token f-score by more than 4%, corresponding to a 40% reduction in overall error rate. When the training data is very small the Monosyllabic grammar produces the highest accuracy results, presumably because a large proportion of the words in child-directed speech are monosyllabic. However, at around 25 sentences the more complex models that are capable of finding multisyllabic words start to become more accurate.", "cite_spans": [ { "start": 327, "end": 340, "text": "(Brent, 1999)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 197, "end": 205, "text": "Figure 2", "ref_id": "FIGREF1" }, { "start": 347, "end": 354, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Word segmentation with \"function word\" models", "sec_num": "3.1" }, { "text": "It's interesting that after about 1,000 sentences the model that allows \"function words\" only on the right periphery is considerably less accurate than the baseline model. Presumably this is because it tends to misanalyse multi-syllabic words on the right periphery as sequences of monosyllabic words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word segmentation with \"function word\" models", "sec_num": "3.1" }, { "text": "The model that allows \"function words\" only on the left periphery is more accurate than the model that allows them on both the left and right periphery when the input data ranges from about 100 to about 1,000 sentences, but when the training data 1987corpus as a function of training data size for the baseline model, the model where \"function words\" can appear on the left periphery, a model where \"function words\" can appear on the right periphery, and a model where \"function words\" can appear on both the left and the right periphery. For comparison purposes we also include results for a model that assumes that all words are monosyllabic.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word segmentation with \"function word\" models", "sec_num": "3.1" }, { "text": "is larger than about 1,000 sentences both models are equally accurate.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word segmentation with \"function word\" models", "sec_num": "3.1" }, { "text": "As noted earlier, the \"function word\" model generates function words via adapted nonterminals other than the Word category. 
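Throughout this section, token f-score compares the word tokens in the proposed segmentation of each utterance against those in the gold segmentation, while lexicon (type) f-score compares the sets of word types found in the whole corpus. The following minimal sketch (our illustration of these standard metrics, not the evaluation scripts used to produce the figures) makes both explicit:

```python
def spans(words):
    """Convert a segmented utterance (a list of words) into token spans."""
    out, pos = set(), 0
    for w in words:
        out.add((pos, pos + len(w)))
        pos += len(w)
    return out

def f_score(n_correct, n_proposed, n_gold):
    p = n_correct / n_proposed if n_proposed else 0.0
    r = n_correct / n_gold if n_gold else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

def evaluate(proposed, gold):
    """proposed, gold: lists of segmented utterances (lists of words)."""
    correct = sum(len(spans(p) & spans(g)) for p, g in zip(proposed, gold))
    token_f = f_score(correct,
                      sum(len(p) for p in proposed),
                      sum(len(g) for g in gold))
    prop_types = {w for u in proposed for w in u}
    gold_types = {w for u in gold for w in u}
    lexicon_f = f_score(len(prop_types & gold_types),
                        len(prop_types), len(gold_types))
    return token_f, lexicon_f

# Toy example: one utterance segmented two ways.
gold = [["you", "want", "to", "see", "the", "book"]]
proposed = [["you", "want", "to", "see", "thebook"]]
print(evaluate(proposed, gold))   # both scores come out around 0.73
```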
In order to better understand just how the model works, we give the 5 most frequent words in each word category found during 8 MCMC runs of the left-peripheral \"function word\" grammar above: Thus, the present model, initially aimed at segmenting words from continuous speech, shows three interesting characteristics that are also exhibited by human infants: it distinguishes between function words and content words (Shi and Werker, 2001) , it allows learners to acquire at least some of the function words of their language (e.g. (Shi et al., 2006) ); and furthermore, it may also allow them to start grouping together function words according to their category (Cauvet et al., 2014; Shi and Melan\u00e7on, 2010) .", "cite_spans": [ { "start": 540, "end": 562, "text": "(Shi and Werker, 2001)", "ref_id": "BIBREF26" }, { "start": 655, "end": 673, "text": "(Shi et al., 2006)", "ref_id": "BIBREF27" }, { "start": 787, "end": 808, "text": "(Cauvet et al., 2014;", "ref_id": "BIBREF2" }, { "start": 809, "end": 832, "text": "Shi and Melan\u00e7on, 2010)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Content and function words found by \"function word\" model", "sec_num": "3.2" }, { "text": "Word : book,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Content and function words found by \"function word\" model", "sec_num": "3.2" }, { "text": "4 Are \"function words\" on the left or right periphery?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Content and function words found by \"function word\" model", "sec_num": "3.2" }, { "text": "We have shown that a model that expects function words on the left periphery performs more accurate word segmentation on English, where function words do indeed typically occur on the left periphery, leaving open the question: how could a learner determine whether function words generally appear on the left or the right periphery of phrases in the language they are learning? This question is important because knowing the side where function words preferentially occur is re-lated to the question of the direction of syntactic headedness in the language, and an accurate method for identifying the location of function words might be useful for initialising a syntactic learner. Experimental evidence suggests that infants as young as 8 months of age already expect function words on the correct side for their language -left-periphery for Italian infants and right-periphery for Japanese infants (Gervain et al., 2008) -so it is interesting to see whether purely distributional learners such as the ones studied here can identify the correct location of function words in phrases. We experimented with a variety of approaches that use a single adaptor grammar inference process, but none of these were successful. For example, we hoped that given an Adaptor Grammar that permits \"function words\" on both the left and right periphery, the inference procedure would decide that the right-periphery rules simply are not used in a language like English. 
Unfortunately we did not find this in our experiments; the right-periphery rules were used almost as often as the left-periphery rules (recall that a large fraction of the words in English child-directed speech are monosyllabic).", "cite_spans": [ { "start": 900, "end": 922, "text": "(Gervain et al., 2008)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Content and function words found by \"function word\" model", "sec_num": "3.2" }, { "text": "In this section, we show that learners could use Bayesian model selection to determine that function words appear on the left periphery in English by comparing the marginal probability of the data for the left-periphery and the right-periphery models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Content and function words found by \"function word\" model", "sec_num": "3.2" }, { "text": "Instead, we used Bayesian model selection techniques to determine whether left-peripheral or a right-peripheral model better fits the unsegmented utterances that constitute the training data. 2 While Bayesian model selection is in principle straight-forward, it turns out to require the ratio of two integrals (for the \"evidence\" or marginal likelihood) that are often intractable to compute.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Content and function words found by \"function word\" model", "sec_num": "3.2" }, { "text": "Specifically, given a training corpus D of unsegmented sentences and model families G 1 and G 2 (here the \"function word\" adaptor grammars with left-peripheral and right-peripheral attachment respectively), the Bayes factor K is the ratio of the marginal likelihoods of the data: where the marginal likelihood or \"evidence\" for a model G is obtained by integrating over all of the hidden or latent structure and parameters \u03b8:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Content and function words found by \"function word\" model", "sec_num": "3.2" }, { "text": "K = P(D | G 1 ) P(D | G 2 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Content and function words found by \"function word\" model", "sec_num": "3.2" }, { "text": "P(D | G) = \u222b \u2206 P(D, \u03b8 | G) d\u03b8 (31)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Content and function words found by \"function word\" model", "sec_num": "3.2" }, { "text": "Here the variable \u03b8 ranges over the space \u2206 of all possible parses for the utterances in D and all possible configurations of the Pitman-Yor processes and their parameters that constitute the \"state\" of the Adaptor Grammar G. While the probability of any specific Adaptor Grammar configuration \u03b8 is not too hard to calculate (the MCMC sampler for Adaptor Grammars can print this after each sweep through D), the integral in (31) is in general intractable. Textbooks such as Murphy (2012) describe a number of methods for calculating P(D | G), but most of them assume that the parameter space \u2206 is continuous and so cannot be directly applied here. 
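One estimator that does apply, because it needs only the joint probabilities P(D, θ_i | G) that the MCMC sampler already reports for its samples, is the Harmonic Mean estimator introduced next. A minimal log-space sketch of using it to estimate the log Bayes factor (our illustration, with hypothetical numbers rather than values from our runs) is:

```python
import math

def log_sum_exp(xs):
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def log_evidence_harmonic_mean(log_joints):
    # Harmonic-mean estimate of log P(D | G) from sampled values of
    # log P(D, theta_i | G):  log n - logsumexp(-log P(D, theta_i | G)).
    n = len(log_joints)
    return math.log(n) - log_sum_exp([-lj for lj in log_joints])

def log_bayes_factor(log_joints_left, log_joints_right):
    # log K = log P(D | G_left) - log P(D | G_right)
    return (log_evidence_harmonic_mean(log_joints_left)
            - log_evidence_harmonic_mean(log_joints_right))

# Hypothetical per-sample log joint probabilities from the two sets of runs.
left_runs = [-10250.3, -10247.8, -10251.1, -10249.5]
right_runs = [-10880.5, -10876.2, -10882.9, -10879.0]
print(log_bayes_factor(left_runs, right_runs))   # > 0 favours left-periphery
```

Working in log space only avoids numerical underflow when exponentiating very small probabilities; it does not address the statistical instability of the estimator discussed below.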
The Harmonic Mean estimator (32) for (31), which we used here, is a popular estimator for (31) because it only requires the ability to calculate P(D, \u03b8 | G) for samples from P(\u03b8 | D, G):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Content and function words found by \"function word\" model", "sec_num": "3.2" }, { "text": "P(D | G) \u2248 ( 1 n n \u2211 i=1 1 P(D, \u03b8 i | G) ) \u22121", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Content and function words found by \"function word\" model", "sec_num": "3.2" }, { "text": "where \u03b8 i , . . . , \u03b8 n are n samples from P(\u03b8 | D, G), which can be generated by the MCMC procedure. Figure 3 depicts how the Bayes factor in favour of left-peripheral attachment of \"function words\" varies as a function of the number of utterances in the training data D (calculated from the last 1000 sweeps of 8 MCMC runs of the corresponding adaptor grammars). As that figure shows, once the training data contains more than about 1,000 sentences the evidence for the leftperipheral grammar becomes very strong. On the full training data the estimated log Bayes factor is over 6,000, which would constitute overwhelming evidence in favour of left-peripheral attachment.", "cite_spans": [], "ref_spans": [ { "start": 102, "end": 110, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Content and function words found by \"function word\" model", "sec_num": "3.2" }, { "text": "Unfortunately, as Murphy and others warn, the Harmonic Mean estimator is extremely unstable (Radford Neal calls it \"the worst MCMC method ever\" in his blog), so we think it is important to confirm these results using a more stable estimator. However, given the magnitude of the differences and the fact that the two models being compared are of similar complexity, we believe that these results suggest that Bayesian model selection can be used to determine properties of the language being learned.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Content and function words found by \"function word\" model", "sec_num": "3.2" }, { "text": "This paper showed that the word segmentation accuracy of a state-of-the-art Adaptor Grammar model is significantly improved by extending it so that it explicitly models some properties of function words. We also showed how Bayesian model selection can be used to identify that function words appear on the left periphery of phrases in English, even though the input to the model only consists of an unsegmented sequence of phones.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and future work", "sec_num": "5" }, { "text": "Of course this work only scratches the surface in terms of investigating the role of function words in language acquisition. It would clearly be very interesting to examine the performance of these models on other corpora of child-directed English, as well as on corpora of child-directed speech in other languages. 
Our evaluation focused on wordsegmentation, but we could also evaluate the effect that modelling \"function words\" has on other aspects of the model, such as its ability to learn syllable structure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and future work", "sec_num": "5" }, { "text": "The models of \"function words\" we investigated here only capture two of the 7 linguistic properties of function words identified in section 1 (i.e., that function words tend to be monosyllabic, and that they tend to appear phrase-peripherally), so it would be interesting to develop and explore models that capture other linguistic properties of function words. For example, following the suggestion by Hochmann et al. (2010) that human learners use frequency cues to identify function words, it might be interesting to develop computational models that do the same thing. In an Adaptor Grammar the frequency distribution of function words might be modelled by specifying the prior for the Pitman-Yor Process parameters associated with the function words' adapted nonterminals so that it prefers to generate a small number of high-frequency items.", "cite_spans": [ { "start": 403, "end": 425, "text": "Hochmann et al. (2010)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions and future work", "sec_num": "5" }, { "text": "It should also be possible to develop models which capture the fact that function words tend not to be topic-specific. Johnson et al. (2010) and Johnson et al. (2012) show how Adaptor Grammars can model the association between words and non-linguistic \"topics\"; perhaps these models could be extended to capture some of the semantic properties of function words.", "cite_spans": [ { "start": 119, "end": 140, "text": "Johnson et al. (2010)", "ref_id": "BIBREF16" }, { "start": 145, "end": 166, "text": "Johnson et al. (2012)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions and future work", "sec_num": "5" }, { "text": "It would also be interesting to further explore the extent to which Bayesian model selection is a useful approach to linguistic \"parameter setting\". In order to do this it is imperative to develop better methods than the problematic \"Harmonic Mean\" estimator used here for calculating the evidence (i.e., the marginal probability of the data) that can handle the combination of discrete and continuous hidden structure that occur in computational linguistic models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and future work", "sec_num": "5" }, { "text": "As well as substantially improving the accuracy of unsupervised word segmentation, this work is interesting because it suggests a connection between unsupervised word segmentation and the induction of syntactic structure. It is reasonable to expect that hierarchical non-parametric Bayesian models such as Adaptor Grammars may be useful tools for exploring such a connection.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and future work", "sec_num": "5" }, { "text": "The phone 'l' is generated by both Consonant and Vowel, so \"little\" can be (incorrectly) analysed as one syllable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Note that neither the left-peripheral nor the rightperipheral model is correct: even strongly left-headed languages like English typically contain a few right-headed constructions. 
For example, \"ago\" is arguably the head of the phrase \"ten years ago\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was supported in part by the Australian Research Council's Discovery Projects funding scheme (project numbers DP110102506 and DP110102593), the European Research Council (ERC-2011-AdG-295810 BOOTPHON), the Agence Nationale pour la Recherche (ANR-10-LABX-0087 IEC, and ANR-10-IDEX-0001-02 PSL*), and the Mairie de Paris, Ecole des Hautes Etudes en Sciences Sociales, the Ecole Normale Sup\u00e9rieure, and the Fondation Pierre Gilles de Gennes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "The phonology of parentchild speech", "authors": [ { "first": "N", "middle": [], "last": "Bernstein-Ratner", "suffix": "" } ], "year": 1987, "venue": "Children's Language", "volume": "6", "issue": "", "pages": "159--174", "other_ids": {}, "num": null, "urls": [], "raw_text": "N. Bernstein-Ratner. 1987. The phonology of parent- child speech. In K. Nelson and A. van Kleeck, ed- itors, Children's Language, volume 6, pages 159- 174. Erlbaum, Hillsdale, NJ.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "An efficient, probabilistically sound algorithm for segmentation and word discovery", "authors": [ { "first": "M", "middle": [], "last": "Brent", "suffix": "" } ], "year": 1999, "venue": "Machine Learning", "volume": "34", "issue": "", "pages": "71--105", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Brent. 1999. An efficient, probabilistically sound algorithm for segmentation and word discovery. Machine Learning, 34:71-105.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Function words constrain on-line recognition of verbs and nouns in French 18-month-olds", "authors": [ { "first": "E", "middle": [], "last": "Cauvet", "suffix": "" }, { "first": "R", "middle": [], "last": "Limissuri", "suffix": "" }, { "first": "S", "middle": [], "last": "Millotte", "suffix": "" }, { "first": "K", "middle": [], "last": "Skoruppa", "suffix": "" }, { "first": "D", "middle": [], "last": "Cabrol", "suffix": "" }, { "first": "A", "middle": [], "last": "Christophe", "suffix": "" } ], "year": 2014, "venue": "Language Learning and Development", "volume": "", "issue": "", "pages": "1--18", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Cauvet, R. Limissuri, S. Millotte, K. Skoruppa, D. Cabrol, and A. Christophe. 2014. Function words constrain on-line recognition of verbs and nouns in French 18-month-olds. Language Learn- ing and Development, pages 1-18.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Bootstrapping lexical and syntactic acquisition", "authors": [ { "first": "A", "middle": [], "last": "Christophe", "suffix": "" }, { "first": "S", "middle": [], "last": "Millotte", "suffix": "" }, { "first": "S", "middle": [], "last": "Bernal", "suffix": "" }, { "first": "J", "middle": [], "last": "Lidz", "suffix": "" } ], "year": 2008, "venue": "Language and Speech", "volume": "51", "issue": "1-2", "pages": "61--75", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Christophe, S. Millotte, S. Bernal, and J. Lidz. 2008. Bootstrapping lexical and syntactic acquisi- tion. 
Language and Speech, 51(1-2):61-75.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Variational inference for adaptor grammars", "authors": [ { "first": "S", "middle": [ "B" ], "last": "Cohen", "suffix": "" }, { "first": "D", "middle": [ "M" ], "last": "Blei", "suffix": "" }, { "first": "N", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2010, "venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "564--572", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. B. Cohen, D. M. Blei, and N. A. Smith. 2010. Vari- ational inference for adaptor grammars. In Human Language Technologies: The 2010 Annual Confer- ence of the North American Chapter of the Associa- tion for Computational Linguistics, pages 564-572, Los Angeles, California, June. Association for Com- putational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "The prosodic (re-)organization of childrens early English articles", "authors": [ { "first": "K", "middle": [], "last": "Demuth", "suffix": "" }, { "first": "E", "middle": [], "last": "Mccullough", "suffix": "" } ], "year": 2009, "venue": "Journal of Child Language", "volume": "36", "issue": "1", "pages": "173--200", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Demuth and E. McCullough. 2009. The prosodic (re-)organization of childrens early English articles. Journal of Child Language, 36(1):173-200.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Rethinking Innateness: A Connectionist Perspective on Development", "authors": [ { "first": "J", "middle": [], "last": "Elman", "suffix": "" }, { "first": "E", "middle": [], "last": "Bates", "suffix": "" }, { "first": "M", "middle": [ "H" ], "last": "Johnson", "suffix": "" }, { "first": "A", "middle": [], "last": "Karmiloff-Smith", "suffix": "" }, { "first": "D", "middle": [], "last": "Parisi", "suffix": "" }, { "first": "K", "middle": [], "last": "Plunkett", "suffix": "" } ], "year": 1996, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Elman, E. Bates, M. H. Johnson, A. Karmiloff- Smith, D. Parisi, and K. Plunkett. 1996. Rethink- ing Innateness: A Connectionist Perspective on De- velopment. MIT Press/Bradford Books, Cambridge, MA.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Linguistics: An Introduction to Linguistic Theory", "authors": [], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "V. Fromkin, editor. 2001. Linguistics: An Introduction to Linguistic Theory. Blackwell, Oxford, UK.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Bootstrapping word order in prelexical infants: A japaneseitalian cross-linguistic study", "authors": [ { "first": "J", "middle": [], "last": "Gervain", "suffix": "" }, { "first": "M", "middle": [], "last": "Nespor", "suffix": "" }, { "first": "R", "middle": [], "last": "Mazuka", "suffix": "" }, { "first": "R", "middle": [], "last": "Horie", "suffix": "" }, { "first": "J", "middle": [], "last": "Mehler", "suffix": "" } ], "year": 2008, "venue": "Cognitive Psychology", "volume": "57", "issue": "1", "pages": "56--74", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Gervain, M. Nespor, R. Mazuka, R. Horie, and J. Mehler. 2008. Bootstrapping word order in prelexical infants: A japaneseitalian cross-linguistic study. 
Cognitive Psychology, 57(1):56-74.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A Bayesian framework for word segmentation: Exploring the effects of context", "authors": [ { "first": "S", "middle": [], "last": "Goldwater", "suffix": "" }, { "first": "T", "middle": [ "L" ], "last": "Griffiths", "suffix": "" }, { "first": "M", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2009, "venue": "Cognition", "volume": "112", "issue": "1", "pages": "21--54", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Goldwater, T. L. Griffiths, and M. Johnson. 2009. A Bayesian framework for word segmentation: Exploring the effects of context. Cognition, 112(1):21-54.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Producing power-law distributions and damping word frequencies with two-stage language models", "authors": [ { "first": "S", "middle": [], "last": "Goldwater", "suffix": "" }, { "first": "T", "middle": [ "L" ], "last": "Griffiths", "suffix": "" }, { "first": "M", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2011, "venue": "Journal of Machine Learning Research", "volume": "12", "issue": "", "pages": "2335--2382", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Goldwater, T. L. Griffiths, and M. Johnson. 2011. Producing power-law distributions and damping word frequencies with two-stage language models. Journal of Machine Learning Research, 12:2335-2382.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Do 11-month-old French infants process articles?", "authors": [ { "first": "P", "middle": [ "A" ], "last": "Hall\u00e9", "suffix": "" }, { "first": "C", "middle": [], "last": "Durand", "suffix": "" }, { "first": "B", "middle": [], "last": "De Boysson-Bardies", "suffix": "" } ], "year": 2008, "venue": "Language and Speech", "volume": "51", "issue": "1-2", "pages": "23--44", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. A. Hall\u00e9, C. Durand, and B. de Boysson-Bardies. 2008. Do 11-month-old French infants process articles? Language and Speech, 51(1-2):23-44.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Word frequency as a cue for identifying function words in infancy", "authors": [ { "first": "J.-R", "middle": [], "last": "Hochmann", "suffix": "" }, { "first": "A", "middle": [ "D" ], "last": "Endress", "suffix": "" }, { "first": "J", "middle": [], "last": "Mehler", "suffix": "" } ], "year": 2010, "venue": "Cognition", "volume": "115", "issue": "3", "pages": "444--457", "other_ids": {}, "num": null, "urls": [], "raw_text": "J.-R. Hochmann, A. D. Endress, and J. Mehler. 2010. Word frequency as a cue for identifying function words in infancy. Cognition, 115(3):444-457.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Improving nonparameteric Bayesian inference: experiments on unsupervised word segmentation with adaptor grammars", "authors": [ { "first": "M", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "S", "middle": [], "last": "Goldwater", "suffix": "" } ], "year": 2009, "venue": "The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "317--325", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Johnson and S. Goldwater. 2009. Improving nonparameteric Bayesian inference: experiments on unsupervised word segmentation with adaptor grammars.
In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 317-325, Boulder, Colorado, June. Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Bayesian inference for PCFGs via Markov chain Monte Carlo", "authors": [ { "first": "M", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "T", "middle": [], "last": "Griffiths", "suffix": "" }, { "first": "S", "middle": [], "last": "Goldwater", "suffix": "" } ], "year": 2007, "venue": "Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference", "volume": "", "issue": "", "pages": "139--146", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Johnson, T. Griffiths, and S. Goldwater. 2007a. Bayesian inference for PCFGs via Markov chain Monte Carlo. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pages 139-146, Rochester, New York. Association for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Adaptor Grammars: A framework for specifying compositional nonparametric Bayesian models", "authors": [ { "first": "M", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "T", "middle": [ "L" ], "last": "Griffiths", "suffix": "" }, { "first": "S", "middle": [], "last": "Goldwater", "suffix": "" } ], "year": 2007, "venue": "Advances in Neural Information Processing Systems 19", "volume": "", "issue": "", "pages": "641--648", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Johnson, T. L. Griffiths, and S. Goldwater. 2007b. Adaptor Grammars: A framework for specifying compositional nonparametric Bayesian models. In B. Sch\u00f6lkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19, pages 641-648. MIT Press, Cambridge, MA.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Synergies in learning words and their referents", "authors": [ { "first": "M", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "K", "middle": [], "last": "Demuth", "suffix": "" }, { "first": "M", "middle": [], "last": "Frank", "suffix": "" }, { "first": "B", "middle": [], "last": "Jones", "suffix": "" } ], "year": 2010, "venue": "Advances in Neural Information Processing Systems 23", "volume": "", "issue": "", "pages": "1018--1026", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Johnson, K. Demuth, M. Frank, and B. Jones. 2010. Synergies in learning words and their referents. In J. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. Zemel, and A. Culotta, editors, Advances in Neural Information Processing Systems 23, pages 1018-1026.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Exploiting social information in grounded language learning via grammatical reduction", "authors": [ { "first": "M", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "K", "middle": [], "last": "Demuth", "suffix": "" }, { "first": "M", "middle": [], "last": "Frank", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "883--891", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Johnson, K. Demuth, and M. Frank. 2012.
Exploiting social information in grounded language learning via grammatical reduction. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, pages 883-891, Jeju Island, Korea, July. Association for Computational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Using Adaptor Grammars to identify synergies in the unsupervised acquisition of linguistic structure", "authors": [ { "first": "M", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 46th Annual Meeting of the Association of Computational Linguistics", "volume": "", "issue": "", "pages": "398--406", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Johnson. 2008. Using Adaptor Grammars to identify synergies in the unsupervised acquisition of linguistic structure. In Proceedings of the 46th Annual Meeting of the Association of Computational Linguistics, pages 398-406, Columbus, Ohio. Association for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Getting there faster: 18- and 24-month-old infants' use of function words to determine reference", "authors": [ { "first": "Y", "middle": [], "last": "Kedar", "suffix": "" }, { "first": "M", "middle": [], "last": "Casasola", "suffix": "" }, { "first": "B", "middle": [], "last": "Lust", "suffix": "" } ], "year": 2006, "venue": "Child Development", "volume": "77", "issue": "2", "pages": "325--338", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. Kedar, M. Casasola, and B. Lust. 2006. Getting there faster: 18- and 24-month-old infants' use of function words to determine reference. Child Development, 77(2):325-338.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Variational Bayesian grammar induction for natural language", "authors": [ { "first": "K", "middle": [], "last": "Kurihara", "suffix": "" }, { "first": "T", "middle": [], "last": "Sato", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Kurihara and T. Sato. 2006. Variational Bayesian grammar induction for natural language.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Grammatical Inference: Algorithms and Applications", "authors": [ { "first": "Y", "middle": [], "last": "Sakakibara", "suffix": "" }, { "first": "S", "middle": [], "last": "Kobayashi", "suffix": "" }, { "first": "K", "middle": [], "last": "Sato", "suffix": "" }, { "first": "T", "middle": [], "last": "Nishino", "suffix": "" }, { "first": "E", "middle": [], "last": "Tomita", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "84--96", "other_ids": {}, "num": null, "urls": [], "raw_text": "In Y. Sakakibara, S. Kobayashi, K. Sato, T. Nishino, and E. Tomita, editors, Grammatical Inference: Algorithms and Applications, pages 84-96. Springer.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Machine learning: a probabilistic perspective", "authors": [ { "first": "K", "middle": [ "P" ], "last": "Murphy", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. P. Murphy. 2012. Machine learning: a probabilistic perspective.
The MIT Press.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "An electrophysiological study of infants' sensitivity to the sound patterns of English speech", "authors": [ { "first": "V", "middle": [ "L" ], "last": "Shafer", "suffix": "" }, { "first": "D", "middle": [ "W" ], "last": "Shucard", "suffix": "" }, { "first": "J", "middle": [ "L" ], "last": "Shucard", "suffix": "" }, { "first": "L", "middle": [], "last": "Gerken", "suffix": "" } ], "year": 1998, "venue": "Journal of Speech, Language and Hearing Research", "volume": "41", "issue": "4", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "V. L. Shafer, D. W. Shucard, J. L. Shucard, and L. Gerken. 1998. An electrophysiological study of infants' sensitivity to the sound patterns of English speech. Journal of Speech, Language and Hearing Research, 41(4):874.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "The effect of functional morphemes on word segmentation in preverbal infants", "authors": [ { "first": "R", "middle": [], "last": "Shi", "suffix": "" }, { "first": "M", "middle": [], "last": "Lepage", "suffix": "" } ], "year": 2008, "venue": "Developmental Science", "volume": "11", "issue": "3", "pages": "407--413", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Shi and M. Lepage. 2008. The effect of functional morphemes on word segmentation in preverbal infants. Developmental Science, 11(3):407-413.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Syntactic categorization in French-learning infants", "authors": [ { "first": "R", "middle": [], "last": "Shi", "suffix": "" }, { "first": "A", "middle": [], "last": "Melan\u00e7on", "suffix": "" } ], "year": 2010, "venue": "Infancy", "volume": "15", "issue": "", "pages": "517--533", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Shi and A. Melan\u00e7on. 2010. Syntactic categorization in French-learning infants. Infancy, 15:517-533.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Six-month-old infants' preference for lexical words", "authors": [ { "first": "R", "middle": [], "last": "Shi", "suffix": "" }, { "first": "J", "middle": [], "last": "Werker", "suffix": "" } ], "year": 2001, "venue": "Psychological Science", "volume": "12", "issue": "", "pages": "71--76", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Shi and J. Werker. 2001. Six-month-old infants' preference for lexical words. Psychological Science, 12:71-76.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Frequency and form as determinants of functor sensitivity in English-acquiring infants", "authors": [ { "first": "R", "middle": [], "last": "Shi", "suffix": "" }, { "first": "A", "middle": [], "last": "Cutler", "suffix": "" }, { "first": "J", "middle": [], "last": "Werker", "suffix": "" }, { "first": "M", "middle": [], "last": "Cruickshank", "suffix": "" } ], "year": 2006, "venue": "The Journal of the Acoustical Society of America", "volume": "119", "issue": "6", "pages": "61--67", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Shi, A. Cutler, J. Werker, and M. Cruickshank. 2006. Frequency and form as determinants of functor sensitivity in English-acquiring infants.
The Journal of the Acoustical Society of America, 119(6):EL61-EL67.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Hierarchical Dirichlet processes", "authors": [ { "first": "Y", "middle": [ "W" ], "last": "Teh", "suffix": "" }, { "first": "M", "middle": [], "last": "Jordan", "suffix": "" }, { "first": "M", "middle": [], "last": "Beal", "suffix": "" }, { "first": "D", "middle": [], "last": "Blei", "suffix": "" } ], "year": 2006, "venue": "Journal of the American Statistical Association", "volume": "101", "issue": "", "pages": "1566--1581", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. W. Teh, M. Jordan, M. Beal, and D. Blei. 2006. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101:1566-1581.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Increasing flexibility in children's online processing of grammatical and nonce determiners in fluent speech", "authors": [ { "first": "R", "middle": [], "last": "Zangl", "suffix": "" }, { "first": "A", "middle": [], "last": "Fernald", "suffix": "" } ], "year": 2007, "venue": "Language Learning and Development", "volume": "3", "issue": "3", "pages": "199--231", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Zangl and A. Fernald. 2007. Increasing flexibility in children's online processing of grammatical and nonce determiners in fluent speech. Language Learning and Development, 3(3):199-231.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "j u w \u0251 n t t u s i \u00f0 \u0259 b \u028a k. A word segmentation model should segment this as ju w\u0251nt tu si \u00f0\u0259 b\u028ak, which is the IPA representation of \"you want to see the book\".", "num": null, "type_str": "figure", "uris": null }, "FIGREF1": { "text": "Token and lexicon (i.e., type) f-score on the Bernstein-Ratner corpus", "num": null, "type_str": "figure", "uris": null }, "FIGREF2": { "text": "Bayes factor in favour of left-peripheral \"function word\" attachment as a function of the number of sentences in the training corpus, calculated using the Harmonic Mean estimator (see warning in text).", "num": null, "type_str": "figure", "uris": null }, "TABREF1": { "num": null, "type_str": "table", "html": null, "content": "", "text": "Word: doggy, house, want, I; FuncWord1: a, the, your, little, in; FuncWord2: to, in, you, what, put; FuncWord3: you, a, what, no, can. Interestingly, these categories seem fairly reasonable. The Word category includes open-class nouns and verbs, the FuncWord1 category includes noun modifiers such as determiners, while the FuncWord2 and FuncWord3 categories include prepositions, pronouns and auxiliary verbs." } } } }
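The caption of FIGREF2 refers to Bayes factors computed with the Harmonic Mean estimator. As a minimal sketch (not taken from the paper's own implementation), the standard form of this estimator approximates each model's marginal likelihood from posterior samples and then forms the ratio of the two estimates; here D is the unsegmented corpus, M is one of the competing models, the theta^(s) are S posterior samples of the model's latent structure and parameters, and M_left and M_right are hypothetical labels for the left- and right-peripheral function-word models.

% Sketch of the Harmonic Mean estimate of the marginal likelihood
% (illustrative notation; D, M, theta^(s), M_left, M_right as defined in the lead-in above)
\[
  \hat{P}(D \mid M) \;=\; \left( \frac{1}{S} \sum_{s=1}^{S} \frac{1}{P(D \mid \theta^{(s)}, M)} \right)^{-1},
  \qquad \theta^{(s)} \sim P(\theta \mid D, M)
\]
% Bayes factor plotted in the figure: ratio of the two estimated marginal likelihoods
\[
  \mathrm{BF} \;=\; \frac{\hat{P}(D \mid M_{\mathrm{left}})}{\hat{P}(D \mid M_{\mathrm{right}})}
\]

The "warning in text" presumably concerns this estimator's well-known instability: the sum is dominated by the few samples with the smallest likelihood, so the estimate can have very large (even infinite) variance, and the resulting Bayes factors are better read as rough indications than as precise values.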