{
"paper_id": "N18-1004",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:51:30.205064Z"
},
"title": "A Deep Generative Model of Vowel Formant Typology",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University",
"location": {
"postCode": "21218",
"settlement": "Baltimore",
"region": "MD"
}
},
"email": "ryan.cotterell@jhu.edu"
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University",
"location": {
"postCode": "21218",
"settlement": "Baltimore",
"region": "MD"
}
},
"email": "eisner@jhu.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "What makes some types of languages more probable than others? For instance, we know that almost all spoken languages contain the vowel phoneme /i/; why should that be? The field of linguistic typology seeks to answer these questions and, thereby, divine the mechanisms that underlie human language. In our work, we tackle the problem of vowel system typology, i.e., we propose a generative probability model of which vowels a language contains. In contrast to previous work, we work directly with the acoustic information-the first two formant values-rather than modeling discrete sets of phonemic symbols (IPA). We develop a novel generative probability model and report results based on a corpus of 233 languages.",
"pdf_parse": {
"paper_id": "N18-1004",
"_pdf_hash": "",
"abstract": [
{
"text": "What makes some types of languages more probable than others? For instance, we know that almost all spoken languages contain the vowel phoneme /i/; why should that be? The field of linguistic typology seeks to answer these questions and, thereby, divine the mechanisms that underlie human language. In our work, we tackle the problem of vowel system typology, i.e., we propose a generative probability model of which vowels a language contains. In contrast to previous work, we work directly with the acoustic information-the first two formant values-rather than modeling discrete sets of phonemic symbols (IPA). We develop a novel generative probability model and report results based on a corpus of 233 languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Human languages are far from arbitrary; crosslinguistically, they exhibit surprising similarity in many respects and many properties appear to be universally true. The field of linguistic typology seeks to investigate, describe and quantify the axes along which languages vary. One facet of language that has been the subject of heavy investigation is the nature of vowel inventories, i.e., which vowels a language contains. It is a cross-linguistic universal that all spoken languages have vowels (Gordon, 2016) , and the underlying principles guiding vowel selection are understood: vowels must be both easily recognizable and well-dispersed (Schwartz et al., 2005) . In this work, we offer a more formal treatment of the subject, deriving a generative probability model of vowel inventory typology. Our work builds on (Cotterell and Eisner, 2017) by investigating not just discrete IPA inventories but the cross-linguistic variation in acoustic formants.",
"cite_spans": [
{
"start": 498,
"end": 512,
"text": "(Gordon, 2016)",
"ref_id": "BIBREF10"
},
{
"start": 644,
"end": 667,
"text": "(Schwartz et al., 2005)",
"ref_id": "BIBREF17"
},
{
"start": 821,
"end": 849,
"text": "(Cotterell and Eisner, 2017)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The philosophy behind our approach is that linguistic typology should be treated probabilistically and its goal should be the construction of a universal prior over potential languages. A probabilistic approach does not rule out linguistic systems completely (as long as one's theoretical formalism can describe them at all), but it can position phenomena on a scale from very common to very improbable. Probabilistic modeling also provides a discipline for drawing conclusions from sparse data. While we know of over 7000 human languages, we have some sort of linguistic analysis for only 2300 of them (Comrie et al., 2013) , and the dataset used in this paper (Becker-Kristal, 2010) provides simple vowel data for fewer than 250 languages.",
"cite_spans": [
{
"start": 603,
"end": 624,
"text": "(Comrie et al., 2013)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Formants are the resonant frequencies of the human vocal tract during the production of speech sounds. We propose a Bayesian generative model of vowel inventories, where each language's inventory is a finite subset of acoustic vowels represented as points (F 1 , F 2 ) \u2208 R 2 . We deploy tools from the neural-network and point-process literatures and experiment on a dataset with 233 distinct languages. We show that our most complicated model outperforms simpler models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Much of human communication takes place through speech: one conversant emits a sound wave to be comprehended by a second. In this work, we consider the nature of the portions of such sound waves that correspond to vowels. We briefly review the relevant bits of acoustic phonetics so as to give an overview of the data we are actually modeling and develop our notation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acoustic Phonetics and Formants",
"sec_num": "2"
},
{
"text": "The anatomy of a sound wave. The sound wave that carries spoken language is a function from time to amplitude, describing sound pressure variation in the air. To distinguish vowels, it is helpful to transform this function into a spectrogram (Fig. 1 ) by using a short-time Fourier transform /i/, /u/ and /A/. The x-axis is time and y-axis is frequency. The first two formants F1 and F2 are marked in with arrows for each vowel. The figure was made with Praat (Boersma et al., 2002) . (Deng and O'Shaughnessy, 2003 , Chapter 1) to decompose each short interval of the wave function into a weighted sum of sinusoidal waves of different frequencies (measured in Hz). At each interval, the variable darkness of the spectrogram indicates the weights of the different frequencies. In phonetic analysis, a common quantity to consider is a formant-a local maximum of the (smoothed) frequency spectrum. The fundamental frequency F 0 determines the pitch of the sound. The formants F 1 and F 2 determine the quality of the vowel.",
"cite_spans": [
{
"start": 460,
"end": 482,
"text": "(Boersma et al., 2002)",
"ref_id": "BIBREF3"
},
{
"start": 485,
"end": 514,
"text": "(Deng and O'Shaughnessy, 2003",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 242,
"end": 249,
"text": "(Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Acoustic Phonetics and Formants",
"sec_num": "2"
},
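The short-time Fourier transform described above can be sketched numerically. This is a minimal illustration, not the paper's pipeline: the sample rate, frame size, and the two sinusoids standing in for formant peaks are all invented for the example.

```python
import numpy as np

# A synthetic vowel-like signal: two sinusoids standing in for spectral peaks
# near F1 ~ 300 Hz and F2 ~ 2300 Hz (illustrative values only).
fs = 8000                                   # sample rate in Hz
t = np.arange(0, 0.5, 1 / fs)
signal = np.sin(2 * np.pi * 300 * t) + 0.5 * np.sin(2 * np.pi * 2300 * t)

# Short-time Fourier transform: window overlapping frames, FFT each one.
frame, hop = 256, 128
window = np.hanning(frame)
frames = [signal[i:i + frame] * window for i in range(0, len(signal) - frame, hop)]
spectra = np.abs(np.fft.rfft(frames, axis=1))   # one magnitude spectrum per frame

freqs = np.fft.rfftfreq(frame, d=1 / fs)
peak = freqs[np.argmax(spectra.mean(axis=0))]   # strongest frequency, near 300 Hz
```

The local maxima of each smoothed frame spectrum are exactly the formants the text describes; a tool like Praat fits them far more carefully.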
{
"text": "Two is all you need (and what we left out). In terms of vowel recognition, it is widely speculated that humans rely almost exclusively on the first two formants of the sound wave (Ladefoged, 2001, Chapter 5) . The two-formant assumption breaks down in edge cases: e.g., the third formant F 3 helps to distinguish the roundness of the vowel (Ladefoged, 2001, Chapter 5) . Other non-formant features may also play a role. For example, in tonal languages, the same vowel may be realized with different tones (which are signaled using F 0 ): Mandarin Chinese makes a distinction between m\u01ce (horse) and m\u00e1 (hemp) without modifying the quality of the vowel /a/. Other features, such as creaky voice, can play a role in distinguishing phonemes.",
"cite_spans": [
{
"start": 179,
"end": 207,
"text": "(Ladefoged, 2001, Chapter 5)",
"ref_id": null
},
{
"start": 340,
"end": 368,
"text": "(Ladefoged, 2001, Chapter 5)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Acoustic Phonetics and Formants",
"sec_num": "2"
},
{
"text": "We do not explicitly model any of these aspects of vowel space, limiting ourselves to (F 1 , F 2 ) as in previous work (Liljencrants and Lindblom, 1972) . However, it would be easy to extend all the models we will propose here to incorporate such information, given appropriate datasets.",
"cite_spans": [
{
"start": 119,
"end": 152,
"text": "(Liljencrants and Lindblom, 1972)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Acoustic Phonetics and Formants",
"sec_num": "2"
},
{
"text": "The vowel inventories of the world's languages display clear structure and appear to obey several underlying principles. The most prevalent of these principles are focalization and dispersion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Phonology of Vowel Systems",
"sec_num": "3"
},
{
"text": "Focalization. The notion of focalization grew out of quantal vowel theory (Stevens, 1989) . Quantal vowels are those that are phonetically \"better\" than others. They tend to display certain properties, e.g., the formants tend to be closer together (Stevens, 1987) . Cross-linguistically, quantal vowels are the most frequently attested vowels, e.g., the cross-linguistically common vowel /i/ is considered quantal, but less common /y/ is not.",
"cite_spans": [
{
"start": 74,
"end": 89,
"text": "(Stevens, 1989)",
"ref_id": "BIBREF21"
},
{
"start": 248,
"end": 263,
"text": "(Stevens, 1987)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Phonology of Vowel Systems",
"sec_num": "3"
},
{
"text": "The second core principle of vowel system organization is known as dispersion. As the name would imply, the principle states that the vowels in \"good\" vowel systems tend to be spread out. The motivation for such a principle is clear-a well-dispersed set of vowels reduces a listener's potential confusion over which vowel is being pronounced. See Schwartz et al. (1997) for a review of dispersion in vowel system typology and its interaction with focalization, which has led to the joint dispersion-focalization theory.",
"cite_spans": [
{
"start": 347,
"end": 369,
"text": "Schwartz et al. (1997)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dispersion.",
"sec_num": null
},
{
"text": "Notation. We will denote the universal set of international phonetic alphabet (IPA) symbols as V. The observed vowel inventory for language has size n and is denoted",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dispersion.",
"sec_num": null
},
{
"text": "V = {(v 1 , v 1 ), . . . , (v n , v n )} \u2286 V \u00d7 R d , where for each k \u2208 [1, n ], v k \u2208 V",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dispersion.",
"sec_num": null
},
{
"text": "is an IPA symbol assigned by a linguist and v k \u2208 R d is a vector of d measurable phonetic quantities. In short, the IPA symbol v k was assigned as a label for a phoneme with pronunciation v k . The ordering of the elements within V is arbitrary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dispersion.",
"sec_num": null
},
{
"text": "Goals. This framework recognizes that the same IPA symbol v (such as /u/) may represent a slightly different sound v in one language than in another, although they are transcribed identically. We are specifically interested in how the vowels in a language influence one another's fine-grained pronunciation in R d . In general, there is no reason to suspect that speakers of two languages, whose phonological systems contain the same IPA symbol, should produce that vowel with identical formants.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dispersion.",
"sec_num": null
},
{
"text": "Data. For the remainder of the paper, we will take d = 2 so that each v = (F 1 , F 2 ) \u2208 R 2 , the vector consisting of the first two formant values, as compiled from the field literature by Becker-Kristal (2006) . This dataset provides inventories V in the form above. Thus, we do not consider further variation of the vowel pronunciation that may occur within the language (between speakers, between tokens of the vowel, or between earlier and later intervals within a token).",
"cite_spans": [
{
"start": 191,
"end": 212,
"text": "Becker-Kristal (2006)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dispersion.",
"sec_num": null
},
{
"text": "Previous work (Cotterell and Eisner, 2017) has placed a distribution over discrete phonemes, ignoring the variation across languages in the pronunciation of each phoneme. In this paper, we crack open the phoneme abstraction, moving to a learned set of finer-grained phones. Cotterell and Eisner (2017) proposed (among other options) using a determinantal point process (DPP) over a universal inventory V of 53 symbolic (IPA) vowels. A draw from such a DPP is a language-specific inventory of vowel phonemes, V \u2286 V. In this paper, we say that a language instead draws its inventory from a larger setV, again using a DPP. In both cases, the reason to use a DPP is that it prefers relatively diverse inventories whose individual elements are relatively quantal.",
"cite_spans": [
{
"start": 14,
"end": 42,
"text": "(Cotterell and Eisner, 2017)",
"ref_id": "BIBREF6"
},
{
"start": 274,
"end": 301,
"text": "Cotterell and Eisner (2017)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phonemes versus Phones",
"sec_num": "4"
},
{
"text": "While we could in principle identifyV with R d , for convenience we still take it to be a (large) discrete finite setV = {v 1 , . . . ,v N }, whose elements we call phones.V is a learned cross-linguistic parameter of our model; thus, its elements-the \"universal phones\"-may or may not correspond to phonetic categories traditionally used by linguists.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phonemes versus Phones",
"sec_num": "4"
},
{
"text": "We presume that language draws from the DPP a subsetV \u2286V, whose size we call n . For each universal phonev i that appears in this inventoryV , the language then draws an observable languagespecific pronunciation v i \u223c N \u00b5 i , \u03c3 2 I from a distribution associated cross-linguistically with the universal phonev i . We now have an inventory of pronunciations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phonemes versus Phones",
"sec_num": "4"
},
{
"text": "As a final step in generating the vowel inventory, we could model IPA labels. For eachv i \u2208V , a field linguist presumably draws the IPA label v i conditioned on all the pronunciations {v i \u2208 R d : v i \u2208V } in the inventory (and perhaps also on their underlying phonesv i \u2208V ). This labeling process may be complex. While each pronunciation in R d (or each underlying phone inV) may have a preference for certain IPA labels in V, the n labels must be drawn jointly because the linguist will take care not to use the same label for two phones, and also because the linguist may like to describe the inventory using a small number of distinct IPA features, which will tend to favor factorial grids of symbols. The linguist's use of IPA features may also be informed by phonological and phonetic processes in the language. We leave modeling of this step to future work; so our current likelihood term ignores the evidence contributed by the IPA labels in the dataset, considering only the pronunciations in R d .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phonemes versus Phones",
"sec_num": "4"
},
{
"text": "The overall idea is that human languages draw their inventories from some universal prior, which we are attempting to reconstruct. A caveat is that we will train our method by maximum-likelihood, which does not quantify our uncertainty about the reconstructed parameters. An additional caveat is that some languages in our dataset are related to one another, which belies the idea that they were drawn independently. Ideally, one ought to capture these relationships using hierarchical or evolutionary modeling techniques.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phonemes versus Phones",
"sec_num": "4"
},
{
"text": "Before delving into our generative model, we briefly review technical background used by Cotterell and Eisner (2017). A DPP is a probability distribution over the subsets of a fixed ground set of size N -in our case, the set of phonesV. The DPP is usually given as an L-ensemble (Borodin and Rains, 2005) , meaning that it is parameterized by a positive semi-definite matrix L \u2208 R N \u00d7N . Given a discrete base setV of phones, the probability of a subsetV \u2286V is given by",
"cite_spans": [
{
"start": 279,
"end": 304,
"text": "(Borodin and Rains, 2005)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Determinantal Point Processes",
"sec_num": "5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(V ) \u221d det (LV ) ,",
"eq_num": "(1)"
}
],
"section": "Determinantal Point Processes",
"sec_num": "5"
},
{
"text": "where LV is the submatrix of L corresponding to the rows and columns associated with the subset V \u2286V. The entry L ij , where i = j, has the effect of describing the similarity between the elements v i andv j (both inV)-an ingredient needed to model dispersion. And, the entry L ii describes the quality-focalization-of the vowelv i , i.e., how much the model wants to havev i in a sampled set independent of the other members.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Determinantal Point Processes",
"sec_num": "5"
},
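Eq. (1)'s proportionality can be made concrete: for an L-ensemble, the normalizing constant is det(L + I), so subset probabilities can be computed exactly for a small ground set. A minimal sketch, with an illustrative 3×3 L matrix of our own choosing (not values from the paper):

```python
import numpy as np
from itertools import chain, combinations

# Illustrative positive-definite L over a toy ground set of N = 3 phones.
L = np.array([[2.0, 0.5, 0.1],
              [0.5, 1.5, 0.3],
              [0.1, 0.3, 1.0]])

def dpp_prob(L, subset):
    """p(V) = det(L_V) / det(L + I); L_V keeps the rows/cols in `subset`."""
    L_V = L[np.ix_(subset, subset)]
    return np.linalg.det(L_V) / np.linalg.det(L + np.eye(len(L)))

# Enumerating all 2^N subsets shows the probabilities sum to 1.
subsets = [list(s) for s in
           chain.from_iterable(combinations(range(3), r) for r in range(4))]
total = sum(dpp_prob(L, s) for s in subsets)
```

Large off-diagonal entries (similar phones) shrink the determinant, so the DPP indeed penalizes inventories of mutually similar phones.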
{
"text": "In this work, each phonev i \u2208V is associated with a probability density over the space of possible pronunciations R 2 . Our measure of phone similarity will consider the \"overlap\" between the densities associated with two phones. This works as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probability Kernel",
"sec_num": "5.1"
},
{
"text": "Given two densities f (x, y) and f (x, y) over R 2 , we define the kernel (Jebara et al., 2004) as Figure 2 : Joint likelihood of M vowel systems under our deep generative probability model for continuous-space vowel inventories. Here language has an observed inventory of pronunciations {v ,k : 1 \u2264 k \u2264 n }, and a k \u2208 [1, N ] denotes a phone that might be responsible for the pronunciation v ,k . Thus, a denotes some way to jointly label all n pronunciations with distinct phones. We must sum over all N n such labelings a \u2208 A(n , N ) since the true labeling is not observed. In other words, we sum over all ways a of completing the data for language . Within each summand, the product of factors 3 and 4 is the probability of the completed data, i.e., the joint probability of generating the inventoryV (a ) of phones used in the labeling and their associated pronunciations. Factor 3 considers the prior probability ofV (a ) under the DPP, and factor 4 is a likelihood term that considers the probability of the associated pronunciations.",
"cite_spans": [
{
"start": 74,
"end": 95,
"text": "(Jebara et al., 2004)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 99,
"end": 107,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Probability Kernel",
"sec_num": "5.1"
},
{
"text": "K(f, f ; \u03c1) = x y f (x, y) \u03c1 f (x, y) \u03c1 dx dy, (3) M =1 p(v ,1 , . . . , v ,n | \u00b5 1 , . . . , \u00b5 N , N ) p(\u00b5 1 , . . . \u00b5 N | N ) p(N ) (2) = M =1 a \u2208A(n ,N ) n k=1 p(v ,k | \u00b5 a k ) 4 p(V (a ) | \u00b5 1 , . . . , \u00b5 N , N ) 3 p(\u00b5 1 , . . . \u00b5 N | N ) 2 p(N ) 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probability Kernel",
"sec_num": "5.1"
},
{
"text": "with inverse temperature parameter \u03c1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probability Kernel",
"sec_num": "5.1"
},
{
"text": "In our setting, f, f will both be Gaussian distributions with means \u00b5 and \u00b5 that share a fixed spherical covariance matrix \u03c3 2 I. Then eq. 3and indeed its generalization to any R d has a closedform solution (Jebara et al., 2004, \u00a73.1) :",
"cite_spans": [
{
"start": 207,
"end": 234,
"text": "(Jebara et al., 2004, \u00a73.1)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Probability Kernel",
"sec_num": "5.1"
},
{
"text": "K(f,f ; \u03c1) = (4) (2\u03c1) d 2 2\u03c0\u03c3 2 (1\u22122\u03c1)d 2 exp \u2212 \u03c1||\u00b5 \u2212 \u00b5 || 2 4\u03c3 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probability Kernel",
"sec_num": "5.1"
},
{
"text": "Notice that making \u03c1 small (i.e., high temperature) has an effect on (4) similar to scaling the variance \u03c3 2 by the temperature, but it also results in changing the scale of K, which affects the balance between dispersion and focalization in (6) below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probability Kernel",
"sec_num": "5.1"
},
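The closed form in eq. (4) can be sanity-checked at ρ = 1, where the kernel reduces to the exact Gaussian overlap integral ∫ f f′ = N(μ − μ′; 0, 2σ²I). A minimal sketch with illustrative means and variance (values are our own, not from the paper):

```python
import numpy as np

# Probability product kernel of eq. (4) for spherical Gaussians N(mu, sigma2*I).
def kernel(mu, mu_p, sigma2, rho, d=2):
    return ((2 * rho) ** (-d / 2)
            * (2 * np.pi * sigma2) ** ((1 - 2 * rho) * d / 2)
            * np.exp(-rho * np.sum((mu - mu_p) ** 2) / (4 * sigma2)))

mu, mu_p, sigma2 = np.array([0.3, -0.2]), np.array([1.0, 0.5]), 0.4

# At rho = 1 the kernel equals the convolution identity
# integral N(x; mu, s2 I) N(x; mu', s2 I) dx = N(mu - mu'; 0, 2 s2 I), d = 2:
expected = (4 * np.pi * sigma2) ** -1 * np.exp(-np.sum((mu - mu_p) ** 2) / (4 * sigma2))
```

The kernel decays with ‖μ − μ′‖, so nearby phones get large similarity entries, which the DPP then disfavors.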
{
"text": "The probability kernel given in eq. (3) naturally handles the linguistic notion of dispersion. What about focalization? We say that a phone is focal to the extent that it has a high score",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Focalization Score",
"sec_num": "5.2"
},
{
"text": "F (\u00b5) = exp (U 2 tanh(U 1 \u00b5 + b 1 ) + b 2 ) > 0 (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Focalization Score",
"sec_num": "5.2"
},
{
"text": "where \u00b5 is the mean of its density. To learn the parameters of this neural network from data is to learn which phones are focal. We use a neural network since the focal regions of R 2 are distributed in a complex way.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Focalization Score",
"sec_num": "5.2"
},
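The focalization score of eq. (5) is just a tiny positive-valued network. A sketch with arbitrary untrained weights, purely to show the shape of the computation:

```python
import numpy as np

# Focalization score F(mu) of eq. (5): a 2-layer net whose exp output
# is strictly positive. All weights here are illustrative, not learned.
U1 = np.array([[0.5, -0.3], [0.2, 0.8]])   # in R^{2x2}
U2 = np.array([0.7, -0.4])                 # maps the hidden layer to a scalar
b1, b2 = np.array([0.1, 0.0]), -0.2

def F(mu):
    # exp(.) guarantees F(mu) > 0, as required for the diagonal of L.
    return np.exp(U2 @ np.tanh(U1 @ mu + b1) + b2)
```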
{
"text": "If",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The L Matrix",
"sec_num": "5.3"
},
{
"text": "f i = N (\u00b5 i , \u03c3 2 I)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The L Matrix",
"sec_num": "5.3"
},
{
"text": "is the density associated with the phonev i , we may populate an N \u00d7 N real",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The L Matrix",
"sec_num": "5.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Algorithm 1 Generative Process 1: N \u223c Poisson (\u03bb) (\u2208 N) 1 2: for i = 1 to N : 3: \u00b5 i \u223c N (0, I) (\u2208 R 2 ) 2 4: define L \u2208 R N \u00d7N via (6) 5: for = 1 to M : 6:V \u223c DPP (L) (\u2286 [1, N ]); let n = |V | 3 7: for i \u2208V : 8:\u1e7d i \u223c N \u00b5 i , \u03c3 2 I 4 9: v i = \u03bd \u03b8 \u1e7d i 4 matrix L where L ij = K(f i , f j ; \u03c1) if i = j K(f i , f j ; \u03c1) + F (\u00b5 i ) if i = j",
"eq_num": "(6)"
}
],
"section": "The L Matrix",
"sec_num": "5.3"
},
{
"text": "Since L is the sum of two positive definite matrices (the first specializes a known kernel and the second is diagonal and positive), it is also positive definite. As a result, it can be used to parameterize a DPP overV. Indeed, since L is positive definite and not merely positive semidefinite, it will assign positive probability to any subset ofV.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The L Matrix",
"sec_num": "5.3"
},
{
"text": "As previously noted, this DPP does not define a distribution over an infinite set, e.g., the powerset of R 2 , as does recent work on continuous DPPs (Affandi et al., 2013). Rather, it defines a distribution over the powerset of a set of densities with finite cardinality. Once we have sampled a subset of densities, a real-valued quantity may be additionally sampled from each sampled density.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The L Matrix",
"sec_num": "5.3"
},
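Putting eqs. (4)–(6) together, the L matrix can be assembled from pairwise kernel values plus a positive diagonal focalization term, and positive definiteness then follows as claimed. In this sketch the neural score F(μ) of eq. (5) is replaced by an arbitrary positive stand-in, and all hyperparameter values are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
N, sigma2, rho, d = 5, 0.1, 1.0, 2
mus = rng.normal(size=(N, d))            # phone means, mu_i ~ N(0, I)

def kernel(a, b):
    # Probability product kernel of eq. (4) for shared spherical covariance.
    return ((2 * rho) ** (-d / 2) * (2 * np.pi * sigma2) ** ((1 - 2 * rho) * d / 2)
            * np.exp(-rho * np.sum((a - b) ** 2) / (4 * sigma2)))

def focal(mu):
    # Stand-in positive focalization score, NOT the paper's neural F(mu).
    return float(np.exp(-np.sum(mu ** 2)))

# Eq. (6): kernel everywhere, focalization added on the diagonal.
L = np.array([[kernel(mus[i], mus[j]) for j in range(N)] for i in range(N)])
L[np.diag_indices(N)] += np.array([focal(m) for m in mus])

# PSD kernel matrix + positive diagonal => positive definite.
eigs = np.linalg.eigvalsh(L)
```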
{
"text": "We are now in a position to expound our generative model of continuous-space vowel typology. We generate a set of formant pairs for M languages in a four step process. Note that throughout this exposition, language-specific quantities with be superscripted with an integral language marker , whereas universal quantities are left unsuperscripted. The generative process is written in algorithmic form in Alg. 1. Note that each step is numbered and color-coded for ease of comparison with the full joint likelihood in Fig. 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 517,
"end": 523,
"text": "Fig. 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Deep Generative Model",
"sec_num": "6"
},
{
"text": "Step 1 : p(N ). We sample the size N of the universal phone inventoryV from a Poisson distribution with a rate parameter \u03bb, i.e.,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Deep Generative Model",
"sec_num": "6"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "N \u223c Poisson (\u03bb) .",
"eq_num": "(7)"
}
],
"section": "A Deep Generative Model",
"sec_num": "6"
},
{
"text": "That is, we do not presuppose a certain number of phones in the model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Deep Generative Model",
"sec_num": "6"
},
{
"text": "Step 2 : p(\u00b5 1 , . . . , \u00b5 N ). Next, we sample the means \u00b5 i of the Gaussian phones. In the model presented here, we assume that each phone is generated independently, so",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Deep Generative Model",
"sec_num": "6"
},
{
"text": "p(\u00b5 1 , . . . , \u00b5 N ) = N i=1 p(\u00b5 i ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Deep Generative Model",
"sec_num": "6"
},
{
"text": "Also, we assume a standard Gaussian prior over the means, \u00b5 i \u223c N (0, I).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Deep Generative Model",
"sec_num": "6"
},
{
"text": "The sampled means define our N Gaussian phones N \u00b5 i , \u03c3 2 I : we are assuming for simplicity that all phones share a single spherical covariance matrix, defined by the hyperparameter \u03c3 2 . The dispersion and focalization of these phones define the matrix L according to equations (4)-(6), where \u03c1 in (4) and the weights of the focalization neural net (5) are also hyperparameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Deep Generative Model",
"sec_num": "6"
},
{
"text": "Step",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Deep Generative Model",
"sec_num": "6"
},
{
"text": "3 : p(V | \u00b5 1 , . . . , \u00b5 N ). Next, for each lan- guage \u2208 [1, . . . , M ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Deep Generative Model",
"sec_num": "6"
},
{
"text": ", we sample a diverse subset of the N phones, via a single draw from a DPP parameterized by matrix L:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Deep Generative Model",
"sec_num": "6"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "V \u223c DPP(L),",
"eq_num": "(8)"
}
],
"section": "A Deep Generative Model",
"sec_num": "6"
},
{
"text": "whereV \u2286 [1, N ]. Thus, i \u2208V means that language contains phonev i . Note that even the size of the inventory, n = |V |, was chosen by the DPP. In general, we have n N .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Deep Generative Model",
"sec_num": "6"
},
{
"text": "Step",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Deep Generative Model",
"sec_num": "6"
},
{
"text": "4 : i\u2208V p(v i | \u00b5 i )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Deep Generative Model",
"sec_num": "6"
},
{
"text": "The final step in our generative process is that the phonesv i in language must generate the pronunciations v i \u2208 R 2 (formant vectors) that are actually observed in language . Each vector takes two steps. For each i \u2208V , we generate an underlying\u1e7d i \u2208 R 2 from the corresponding Gaussian phone. Then, we run this vector through a feed-forward neural network \u03bd \u03b8 with parameters \u03b8. In short:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Deep Generative Model",
"sec_num": "6"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "v i \u223c N (\u00b5 i , \u03c3 2 I) (9) v i = \u03bd \u03b8 (\u1e7d i ),",
"eq_num": "(10)"
}
],
"section": "A Deep Generative Model",
"sec_num": "6"
},
{
"text": "where the second step is deterministic. We can fuse these two steps into a single step p(",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Deep Generative Model",
"sec_num": "6"
},
{
"text": "v i | \u00b5 i ),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Deep Generative Model",
"sec_num": "6"
},
{
"text": "whose closed-form density is given in eq. (12) below. In effect, step 4 takes a Gaussian phone as input and produces the observed formant vector with an underlying formant vector in the middle. This completes our generative process. We do not observe all the steps, but only the final collection of pronunciations v i for each language, where the subscripts i that indicate phone identity have been lost. The probability of this incomplete dataset involves summing over possible phones for each pronunciation, and is presented in Fig. 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 530,
"end": 536,
"text": "Fig. 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Deep Generative Model",
"sec_num": "6"
},
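The four steps can be strung together end to end. This sketch uses toy stand-ins throughout: the DPP is sampled by brute-force enumeration over subsets (feasible only for small N, so N is capped), the focalization scores are constants, and ν_θ is replaced by a single invertible tanh layer with random weights rather than the paper's learned network.

```python
import numpy as np
from itertools import chain, combinations

rng = np.random.default_rng(1)
lam, sigma2, rho, d = 6.0, 0.05, 1.0, 2

N = min(max(1, rng.poisson(lam)), 10)           # step 1 (capped for enumeration)
mus = rng.normal(size=(N, d))                   # step 2: mu_i ~ N(0, I)

def kernel(a, b):
    return ((2 * rho) ** (-d / 2) * (2 * np.pi * sigma2) ** ((1 - 2 * rho) * d / 2)
            * np.exp(-rho * np.sum((a - b) ** 2) / (4 * sigma2)))

L = np.array([[kernel(mus[i], mus[j]) for j in range(N)] for i in range(N)])
L[np.diag_indices(N)] += 1.0                    # stand-in focalization F(mu_i)

# Step 3: exact DPP sample over all 2^N subsets, weighted by det(L_V).
subsets = [list(s) for s in
           chain.from_iterable(combinations(range(N), r) for r in range(N + 1))]
weights = np.array([np.linalg.det(L[np.ix_(s, s)]) for s in subsets])
V = subsets[rng.choice(len(subsets), p=weights / weights.sum())]

# Step 4: underlying vectors, then a fixed invertible output map (nu stand-in).
W = rng.normal(size=(d, d)) + 2 * np.eye(d)
v_tilde = mus[V] + np.sqrt(sigma2) * rng.normal(size=(len(V), d))
v_obs = np.tanh(v_tilde) @ W.T                  # observed pronunciations
```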
{
"text": "A crucial bit of our model is running a sample from a Gaussian through a neural network. Under certain restrictions, we can find a closed form for the resulting density; we discuss these below. Let \u03bd \u03b8 be a depth-2 multi-layer perceptron",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Neural Transformation of a Gaussian",
"sec_num": "6.1"
},
{
"text": "\u03bd \u03b8 (\u1e7d i ) = W 2 tanh (W 1\u1e7di + b 1 ) + b 2 . (11)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Neural Transformation of a Gaussian",
"sec_num": "6.1"
},
{
"text": "In order to find a closed-form solution, we require that (5) be a diffeomorphism, i.e., an invertible mapping from R 2 \u2192 R 2 where both \u03bd \u03b8 and its inverse \u03bd \u22121 \u03b8 are differentiable. This will be true as long as W 1 , W 2 \u2208 R 2\u00d72 are square matrices of fullrank and we choose a smooth, invertible activation function, such as tanh. Under those conditions, we may apply the standard theorem for transforming a random variable (see Stark and Woods, 2011) :",
"cite_spans": [
{
"start": 430,
"end": 452,
"text": "Stark and Woods, 2011)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Neural Transformation of a Gaussian",
"sec_num": "6.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(v i | \u00b5 i ) = p(\u03bd \u22121 \u03b8 (v i ) | \u00b5 i ) det J \u03bd \u22121 \u03b8 (v i ) = p(\u1e7d i | \u00b5 i ) det J \u03bd \u22121 \u03b8 (v i )",
"eq_num": "(12)"
}
],
"section": "A Neural Transformation of a Gaussian",
"sec_num": "6.1"
},
{
"text": "where J \u03bd \u22121 \u03b8 (x) is the Jacobian of the inverse of the neural network at the point x. Recall that p(\u1e7d i | \u00b5 i ) is Gaussian-distributed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Neural Transformation of a Gaussian",
"sec_num": "6.1"
},
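The change-of-variables computation in eq. (12) can be sketched in a few lines. This is a minimal illustration, not the authors' code: the weight matrices, biases, and test point below are arbitrary stand-ins, and the 2x2 linear algebra is written out by hand.

```python
import math

# Sketch of eq. (12): push a 2-D Gaussian sample through an invertible
# depth-2 MLP and evaluate the resulting density via change of variables.
# W1, W2 must be full-rank 2x2 matrices for nu to be a diffeomorphism.
W1 = [[1.2, 0.3], [-0.4, 0.9]]
W2 = [[0.8, -0.2], [0.1, 1.1]]
b1 = [0.1, -0.2]
b2 = [0.05, 0.3]

def mat_vec(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]

def det2(M):
    return M[0][0]*M[1][1] - M[0][1]*M[1][0]

def solve2(M, v):
    d = det2(M)  # nonzero because M is full rank
    return [(M[1][1]*v[0] - M[0][1]*v[1]) / d,
            (-M[1][0]*v[0] + M[0][0]*v[1]) / d]

def nu(x):
    """Forward map: nu_theta(x) = W2 tanh(W1 x + b1) + b2, as in eq. (11)."""
    h = [math.tanh(u + b) for u, b in zip(mat_vec(W1, x), b1)]
    return [y + b for y, b in zip(mat_vec(W2, h), b2)]

def nu_inv(y):
    """Inverse map; exists because tanh is invertible and W1, W2 are full rank."""
    h = solve2(W2, [y[0] - b2[0], y[1] - b2[1]])  # undo the outer affine layer
    u = [math.atanh(c) for c in h]                # undo tanh
    return solve2(W1, [u[0] - b1[0], u[1] - b1[1]])

def log_density(y, mu, sigma2=1.0):
    """log p(v | mu): Gaussian log-density at nu_inv(v), plus log |det J_{nu^{-1}}(v)|."""
    x = nu_inv(y)
    log_gauss = (-((x[0] - mu[0])**2 + (x[1] - mu[1])**2) / (2 * sigma2)
                 - math.log(2 * math.pi * sigma2))
    # |det J_forward(x)| = |det W2| * prod_i (1 - tanh(u_i)^2) * |det W1|,
    # with u = W1 x + b1; the inverse-Jacobian determinant is its reciprocal.
    u = [a + b for a, b in zip(mat_vec(W1, x), b1)]
    log_det_fwd = (math.log(abs(det2(W1))) + math.log(abs(det2(W2)))
                   + sum(math.log(1 - math.tanh(ui)**2) for ui in u))
    return log_gauss - log_det_fwd

x = [0.3, -0.2]
y = nu(x)
assert all(abs(a - b) < 1e-9 for a, b in zip(nu_inv(y), x))  # round trip
```

The round-trip assertion checks the diffeomorphism property; the `log_det_fwd` term is what makes the transformed density integrate to one.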
{
"text": "Imbued in our generative story are a number of assumptions about the linguistic processes behind vowel inventories. We briefly draw connections between our theory and the linguistics literature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling Assumptions",
"sec_num": "7"
},
{
"text": "Why underlying phones? A technical assumption of our model is the existence of a universal set of underlying phones. Each phone is equipped with a probability distribution over reported acoustic measurements (pronunciations), to allow for a single phone to account for multiple slightly different pronunciations in different languages (though never in the same language). This distribution can capture both actual interlingual variation and also random noise in the measurement process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling Assumptions",
"sec_num": "7"
},
{
"text": "While our universal phones may seem to resemble the universal IPA symbols used in phonological transcription, they lack the rich featural specifications of such phonemes. A phone in our model has no features other than its mean position, which wholly determines its behavior. Our universal phones are not a substantive linguistic hypothesis, but are essentially just a way of partitioning R 2 into finitely many small regions whose similarity and focalization can be precomputed. This technical trick allows us to use a discrete rather than a continuous DPP over the R 2 space. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling Assumptions",
"sec_num": "7"
},
{
"text": "Why a neural network? Our phones are Gaussians of spherical variance \u03c3 2 , presumed to be scattered with variance 1 about a two-dimensional latent vowel space. Distances in this latent space are used to compute the dissimilarity of phones for modeling dispersion, and also to describe the phone's ability to vary across languages. That is, two phones that are distant in the latent space can appear in the same inventory-presumably they are easy to discriminate in both perception and articulation-and it is easy to choose which one better explains an acoustic measurement, thereby affecting the other measurements that may appear in the inventory.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling Assumptions",
"sec_num": "7"
},
{
"text": "We relate this latent space to measurable acoustic space by a learned diffeomorphism \u03bd \u03b8 (Cotterell and Eisner, 2017) . \u03bd \u22121 \u03b8 can be regarded as warping the acoustic distances into perceptual/articulatory distances. In some \"high-resolution\" regions of acoustic space, phones with fairly similar (F 1 , F 2 ) values might yet be far apart in the latent space. Conversely, in other regions, relatively large acous-1 Indeed, we could have simply taken our universal phone set to be a huge set of tiny, regularly spaced overlapping Gaussians that \"covered\" (say) the unit circle. As a computational matter, we instead opted to use a smaller set of Gaussians, giving the learner the freedom to infer their positions and tune their variance \u03c3 2 . Because of this freedom, this set should not be too large, or a MAP learner may overfit the training data with zero-variance Gaussians and be unable to explain the test languages-similar to overfitting a Gaussian mixture model. tic changes in some direction might not prevent two phones from acting as similar or two pronunciations from being attributed to the same phone. In general, a unit circle of radius \u03c3 in latent space may be mapped by \u03bd \u03b8 to an oddly shaped connected region in acoustic space, and a Gaussian in latent space may be mapped to a multimodal distribution.",
"cite_spans": [
{
"start": 89,
"end": 117,
"text": "(Cotterell and Eisner, 2017)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 297,
"end": 309,
"text": "(F 1 , F 2 )",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Modeling Assumptions",
"sec_num": "7"
},
{
"text": "We fit our model via MAP-EM (Dempster et al., 1977) . The E-step involves deciding which phones each language has. To achieve this, we fashion a Gibbs sampler (Geman and Geman, 1984) , yielding a Markov-Chain Monte Carlo E-step (Levine and Casella, 2001 ).",
"cite_spans": [
{
"start": 28,
"end": 51,
"text": "(Dempster et al., 1977)",
"ref_id": "BIBREF7"
},
{
"start": 159,
"end": 182,
"text": "(Geman and Geman, 1984)",
"ref_id": "BIBREF9"
},
{
"start": 228,
"end": 253,
"text": "(Levine and Casella, 2001",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inference and Learning",
"sec_num": "8"
},
{
"text": "Inference in our model is intractable even when the phones \u00b5 1 , . . . , \u00b5 N are fixed. Given a language with n vowels, we have to determine which subset of the N phones best explains those vowels. As discussed above, the alignment a between the n vowels and n of the N phones represents a latent variable. Marginalizing it out is #P-hard, as we can see that it is equivalent to summing over all bipartite matchings in a weighted graph, which, in turn, is as costly as computing the permanent of a matrix (Valiant, 1979) . Our sampler 2 is an approximation algorithm for the task. We are interested in sampling a, the labeling of observed vowels with universal phones. Note that this implicitly samples the language's phone inventoryV (a), which is fully determined by a.",
"cite_spans": [
{
"start": 505,
"end": 520,
"text": "(Valiant, 1979)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inference: MCMC E-Step",
"sec_num": "8.1"
},
{
"text": "Specifically, we employ an MCMC method closely related to Gibbs sampling. At each step of the sampler, we update our vowel-phone alignment a as follows. Choose a language and a vowel index k \u2208 [1, n ], and let i = a k (that is, pronunciation v ,k is currently labeled with universal phonev i ). We will consider changing a k to j, where j is drawn from the (N \u2212 n ) phones that do not appear inV (a ), heuristically choosing j in proportion to the likelihood p(v ,k | \u00b5 j ). We then stochastically decide whether to keep a k = i or set a k = j in proportion to the resulting values of the product 4 \u2022 3 in eq. (2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference: MCMC E-Step",
"sec_num": "8.1"
},
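The alignment-update step above can be sketched in miniature. This toy, assumed version uses 1-D "formants," a Gaussian likelihood, and a uniform prior standing in for the DPP factors; the phone means, vowels, and variance are illustrative values, not the paper's.

```python
import math
import random

# Toy sketch of the Metropolized alignment sampler: for each vowel, propose an
# unused phone j (weighted by likelihood), then accept i vs. j in proportion
# to the resulting score. The uniform stand-in prior cancels, leaving only
# the likelihood in the accept step.
random.seed(0)
phones = [1.0, 2.0, 5.0, 8.0, 9.0]   # universal phone means mu_j
vowels = [1.1, 4.9, 8.2]             # observed pronunciations v_k
a = [0, 2, 4]                        # current alignment: vowel k -> phone a[k]
SIGMA2 = 0.5

def lik(v, mu):
    return math.exp(-(v - mu) ** 2 / (2 * SIGMA2))

def sweep(a):
    for k, v in enumerate(vowels):
        i = a[k]
        unused = [j for j in range(len(phones)) if j not in a]
        # heuristic proposal: draw j among currently unused phones,
        # weighted by likelihood, as described in the text
        weights = [lik(v, phones[j]) for j in unused]
        j = random.choices(unused, weights=weights)[0]
        # stochastically keep i or switch to j, in proportion to the scores
        p_i, p_j = lik(v, phones[i]), lik(v, phones[j])
        if random.random() < p_j / (p_i + p_j):
            a[k] = j
    return a

for _ in range(20):   # a few sweeps; the real sampler uses S = 5 per E-step
    a = sweep(a)
```

Because proposals always come from unused phones, the alignment stays a one-to-one matching throughout, mirroring the bipartite-matching structure of the latent variable.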
{
"text": "For a single E-step, the Gibbs sampler \"warmstarts\" with the labeling from the end of the previous iteration's E-step. It sweeps S = 5 times through all vowels for all languages, and returns S sampled labelings, one from the end of each sweep.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inference: MCMC E-Step",
"sec_num": "8.1"
},
{
"text": "We are also interested in automatically choosing the number of phones N , for which we take the Poisson's rate parameter \u03bb = 100. To this end, we employ reversible-jump MCMC (Green, 1995) , resampling N at the start of every E-step.",
"cite_spans": [
{
"start": 174,
"end": 187,
"text": "(Green, 1995)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inference: MCMC E-Step",
"sec_num": "8.1"
},
{
"text": "Given the set of sampled alignments provided by the E-step, our M-step consists of optimizing the log-likelihood of the now-complete training data using the inferred latent variables. We achieved this through SGD training of the diffeomorphism parameters \u03b8, the means \u00b5 i of the Gaussian phones, and the parameters of the focalization kernel F.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning: M-Step",
"sec_num": "8.2"
},
{
"text": "Our data is taken from the Becker-Kristal corpus (Becker-Kristal, 2006) , which is a compilation of various phonetic studies and forms the largest multilingual phonetic database. Each entry in the corpus corresponds to a linguist's phonetic description of a language's vowel system: an inventory consisting of IPA symbols where each symbol is associated with two or more formant values. The corpus contains data from 233 distinct languages. When multiple inventories were available for the same language (due to various studies in the literature), we selected one at random and discarded the others.",
"cite_spans": [
{
"start": 49,
"end": 71,
"text": "(Becker-Kristal, 2006)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "9.1"
},
{
"text": "Baseline #1: Removing dispersion. The key technical innovation in our work lies in the incorporation of a DPP into a generative model of vowel formants-a continuous-valued quantity. The role of the DPP was to model the linguistic principle of dispersion-we may cripple this portion of our model, e.g., by forcing K to be a diagonal kernel, i.e., K ij = 0 for i = j. In this case the DPP becomes a Bernoulli Point Process (BPP)-a special case of the DPP. Since dispersion is widely accepted to be an important principle governing naturally occurring vowel systems, we expect a system trained without such knowledge to perform worse.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "9.2"
},
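The contrast between the full DPP and its diagonal (BPP) restriction can be made concrete on a toy L-ensemble, where P(S) is proportional to det(L_S). The four point positions and the Gaussian similarity kernel below are illustrative choices, not the trained model's.

```python
import itertools
import math

# Toy illustration: a DPP with a full kernel prefers dispersed subsets, while
# its diagonal restriction (a Bernoulli point process) scores all same-size
# subsets equally.
xs = [1.0, 1.1, 3.0, 5.0]   # four candidate "phones" on a line

def kernel(diag_only=False):
    L = [[0.0] * 4 for _ in range(4)]
    for i in range(4):
        for j in range(4):
            if i == j:
                L[i][j] = 1.0
            elif not diag_only:
                L[i][j] = math.exp(-(xs[i] - xs[j]) ** 2)  # similarity decay
    return L

def det(M):
    # Laplace expansion; fine for these tiny matrices
    n = len(M)
    if n == 0:
        return 1.0
    if n == 1:
        return M[0][0]
    return sum(((-1) ** j) * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def subset_prob(S, L):
    """P(S) = det(L_S) / sum_T det(L_T), enumerating all subsets T."""
    num = det([[L[i][j] for j in S] for i in S])
    Z = sum(det([[L[i][j] for j in T] for i in T])
            for r in range(5) for T in itertools.combinations(range(4), r))
    return num / Z

L = kernel()
clustered = subset_prob((0, 1), L)   # phones at 1.0 and 1.1 (confusable)
dispersed = subset_prob((0, 3), L)   # phones at 1.0 and 5.0 (well separated)
```

Here `dispersed` far exceeds `clustered`, since det(L_S) shrinks as rows become near-parallel; with `kernel(diag_only=True)` every two-element subset gets the same probability, which is why the BPP baseline cannot encode dispersion.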
{
"text": "Baseline #2: Removing the neural network \u03bd \u03b8 . Another question we may ask of our formulation is whether we actually need a fancy neural mapping \u03bd \u03b8 to model our typological data well. The human perceptual system is known to perform a non-linear transformation on acoustic signals, starting with the non-linear cochlear transform that is physically performed in the ear. While \u03bd \u22121 \u03b8 is intended as loosely analogous, we determine its benefit by removing eq. (10) from our generative story, i.e., we take the observed formants v k to arise directly from the Gaussian phones.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "9.2"
},
{
"text": "Baseline #3: Supervised phones and alignments. A final baseline we consider is supervised phones. Linguists standardly employ a finite set of phonessymbols from the international phonetic alphabet (IPA). In phonetic annotation, it is common to map each sound in a language back to this universal discrete alphabet. Under such an annotation scheme, it is easy to discern, cross-linguistically, which vowels originate from the same phoneme: an /I/ in German may be roughly equated with an /I/ in English. However, it is not clear how consistent this annotation truly is. There are several reasons to expect high-variance in the cross-linguistic acoustic signal. First, IPA symbols are primarily useful for interlinked phonological distinctions, i.e., one applies the symbol /I/ to distinguish it from /i/ in the given language, rather than to associate it with the sound bearing the same symbol in a second language. Second, field linguists often resort to the closest common IPA symbol, rather than an exact match: if a language makes no distinction between /i/ and /I/, it is more common to denote the sound with a /i/. Thus, IPA may not be as universal as hoped. Our dataset contains 50 IPA symbols so this baseline is only reported for N = 50.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "9.2"
},
{
"text": "Evaluation in our setting is tricky. The scientific goal of our work is to place a bit of linguistic theory on a firm probabilistic footing, rather than a downstream engineering-task, whose performance we could measure. We consider three metrics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "9.3"
},
{
"text": "Cross-Entropy. Our first evaluation metric is cross-entropy: the average negative log-probability of the vowel systems in held-out test data, given the universal inventory of N phones that we trained through EM. We find this to be the cleanest method for scientific evaluation-it is the metric of optimization and has a clear interpretation: how surprised was the model to see the vowel systems of held-out, but attested, languages?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "9.3"
},
{
"text": "The cross-entropy is the negative log of the and expected Euclidean-distance error of the cloze prediction (lower is better). The overall best value for each task is boldfaced. The case N = 50 is compared against our supervised baseline. The N = 57 row is the case where we allowed N to fluctuate during inference using reversible-jump MCMC; this was the N value selected at the final EM iteration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "9.3"
},
{
"text": "ing over held-out languages. 3 Wallach et al. (2009) give several methods for estimating the intractable sum in language . We use the simple harmonic mean estimator, based on 50 samples of a drawn with our Gibbs sampler (warm-started from the final E-step of training).",
"cite_spans": [
{
"start": 29,
"end": 30,
"text": "3",
"ref_id": null
},
{
"start": 31,
"end": 52,
"text": "Wallach et al. (2009)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "9.3"
},
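The harmonic mean estimator can be shown on a toy two-component example where the exact marginal is available for comparison. The means, observation, and equal-prior setup below are illustrative assumptions, not the paper's model.

```python
import math
import random

# Sketch of the harmonic mean estimator: given S posterior samples a^(s) of a
# latent assignment, estimate p(v) as S / sum_s (1 / p(v | a^(s))). In this
# 1-D toy, the latent variable is which of two components generated v.
random.seed(0)
mus = [0.0, 4.0]   # two latent components, each with prior 1/2
v = 0.5            # one observation

def lik(v, mu):
    return math.exp(-(v - mu) ** 2 / 2) / math.sqrt(2 * math.pi)

# exact posterior over the latent assignment, used to draw the samples
post = [lik(v, m) for m in mus]
Z = sum(post)
post = [p / Z for p in post]

S = 50
samples = random.choices([0, 1], weights=post, k=S)
harmonic = S / sum(1.0 / lik(v, mus[a]) for a in samples)
exact = 0.5 * sum(lik(v, m) for m in mus)   # true marginal p(v)
```

With equal priors, E_post[1/p(v|a)] = 1/p(v), so the estimator is consistent; its well-known weakness is high variance, since rare posterior samples with tiny likelihood dominate the sum.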
{
"text": "Cloze Evaluation. In addition, following Cotterell and Eisner (2017), we evaluate our trained model's ability to perform a cloze task (Taylor, 1953) . Given n \u2212 1 or n \u2212 2 of the vowels in heldout language , can we predict the pronunciations v k of the remaining 1 or 2? We predict v k to be \u03bd \u03b8 (\u00b5 i ) where i = a k is the phone inferred by the sampler. Note that the sampler's inference here is based only on the observed vowels (the likelihood) and the focalization-dispersion preferences of the DPP (the prior). We report the expected error of such a prediction-where error is quantified by Euclidean distance in (F 1 , F 2 ) formant space-over the same 50 samples of a . For instance, consider a previously unseen vowel system with formant values {(499, 2199), (861, 1420), (571, 1079)}. A \"cloze1\" evaluation would aim to predict {(499, 2199)} as the missing 3 Since that expression is the product of both probability distributions and probability densities, our \"cross-entropy\" metric is actually the sum of both entropy terms and (potentially negative) differential entropy terms. Thus, a value of 0 has no special significance. vowel, given {(861, 1420), (571, 1079)}, and the fact that n = 3. A \"cloze12\" evaluation would aim to predict two missing vowels.",
"cite_spans": [
{
"start": 134,
"end": 148,
"text": "(Taylor, 1953)",
"ref_id": "BIBREF22"
},
{
"start": 865,
"end": 866,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "9.3"
},
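A stripped-down version of the cloze-1 scoring on the example above: the phone positions and the nearest-neighbor alignment rule are hypothetical stand-ins for the trained model and its DPP-based sampler, but the Euclidean error metric matches the text.

```python
import math

# Toy cloze-1 scoring: align each observed vowel to its nearest "phone," then
# predict the held-out vowel with the nearest *unused* phone and report the
# Euclidean error in (F1, F2) space. Phone positions are illustrative only.
phones = [(500, 2200), (850, 1400), (580, 1100), (300, 800)]
observed = [(861, 1420), (571, 1079)]
held_out = (499, 2199)   # the vowel to be predicted

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# nearest-neighbor alignment stands in for the sampler's inference
used = {min(range(len(phones)), key=lambda j: dist(v, phones[j])) for v in observed}
unused = [j for j in range(len(phones)) if j not in used]
pred = min((phones[j] for j in unused), key=lambda p: dist(p, held_out))
error = dist(pred, held_out)
```

On this example the prediction lands on the phone at (500, 2200), an error of about 1.4 Hz in formant space; the reported metric averages such errors over held-out languages and sampled alignments.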
{
"text": "Here, we report experimental details and the hyperparameters that we use to achieve the results reported. We consider a neural network \u03bd \u03b8 with k \u2208 [1, 4] layers and find k = 1 the best performer on development data. Recall that our diffeomorphism constraint requires that each layer have exactly two hidden units, the same as the number of observed formants. We consider N \u2208 {15, 25, 50, 100} phones as well as letting N fluctuate with reversible-jump MCMC (see footnote 1). We train for 100 iterations of EM, taking S = 5 samples at each E-step. At each M-step, we run 50 iterations of SGD for the focalization NN and also for the diffeomorphism NN. For each N , we selected (\u03c3 2 , \u03c1) by minimizing cross-entropy on a held-out development set. We considered (\u03c3 2 , \u03c1) \u2208 {10 k } 5 k=1 \u00d7 {\u03c1 k } 5 k=1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Details",
"sec_num": "9.4"
},
{
"text": "We report results in Tab. 1. We find that our DPP model improves over the baselines. The results support two claims: (i) dispersion plays an important role in the structure of vowel systems and (ii) learning a non-linear transformation of a Gaussian improves our ability to model sets of formant-pairs. Also, we observe that as we increase the number of phones, the role of the DPP becomes more important. We visualize a sample of the trained alignment in Fig. 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 456,
"end": 462,
"text": "Fig. 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Results and Error Analysis",
"sec_num": "9.5"
},
{
"text": "Frequency Encodes Dispersion. Why does dispersion not always help? The models with fewer phones do not reap the benefits that the models with more phones do. The reason lies in the fact that the most common vowel formants are already dispersed. This indicates that we still have not quite modeled the mechanisms that select for good vowel formants, despite our work at the phonetic level; further research is needed. We would prefer a model that explains the evolutionary motivation of sound systems as communication systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Error Analysis",
"sec_num": "9.5"
},
{
"text": "Number of Induced Phones. What is most salient in the number of induced phones is that it is close to the number of IPA phonemes in the data. However, the performance of the phonemesupervised system is much worse, indicating that, perhaps, while the linguists have the right idea about the number of universal symbols, they did not specify the correct IPA symbol in all cases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Error Analysis",
"sec_num": "9.5"
},
{
"text": "Our data analysis indicates that this is often due to pragmatic concerns in linguistic field analysis. For example, even if /I/ is the proper IPA symbol for the sound, if there is no other sound in the vicinity the annotator may prefer to use more common /i/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Error Analysis",
"sec_num": "9.5"
},
{
"text": "Most closely related to our work is the classic study of Liljencrants and Lindblom (1972) , who provide a simulation-based account of vowel systems. They argued that minima of a certain objective that encodes dispersion should correspond to canonical vowel systems of a given size n. Our tack is different in that we construct a generative probability model, whose parameters we learn from data. However, the essence of modeling is the same in that we explain formant values, rather than discrete IPA symbols. By extension, our work is also closely related to extensions of this theory (Schwartz et al., 1997; Roark, 2001 ) that focused on incorporating the notion of focalization into the experiments.",
"cite_spans": [
{
"start": 57,
"end": 89,
"text": "Liljencrants and Lindblom (1972)",
"ref_id": "BIBREF15"
},
{
"start": 586,
"end": 609,
"text": "(Schwartz et al., 1997;",
"ref_id": "BIBREF18"
},
{
"start": 610,
"end": 621,
"text": "Roark, 2001",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "10"
},
{
"text": "Our present paper can also be regarded as a continuation of Cotterell and Eisner (2017) , in which we used DPPs to model vowel inventories as sets of discrete IPA symbols. That paper pretended that each IPA symbol had a single cross-linguistic (F 1 , F 2 ) pair, an idealization that we remove in this paper by discarding the IPA symbols and modeling formant values directly.",
"cite_spans": [
{
"start": 60,
"end": 87,
"text": "Cotterell and Eisner (2017)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "10"
},
{
"text": "Our model combines existing techniques of probabilistic modeling and inference to attempt to fit the actual distribution of the world's vowel systems. We presented a generative probability model of sets of measured (F 1 , F 2 ) pairs. We view this as a necessary step in the development of generative probability models that can explain the distribution of the world's languages. Previous work on generating vowel inventories has focused on how those inventories were transcribed into IPA by field linguists, whereas we focus on the field linguists' acoustic measurements of how the vowels are actually pronounced.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "11"
},
{
"text": "Taken fromVolkovs and Zemel (2012, 3.1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to acknowledge Tim Vieira, Katharina Kann, Sebastian Mielke and Chu-Cheng Lin for reading many early drafts. The first author would like to acknowledge an NDSEG grant and a Facebook PhD fellowship. This material is also based upon work supported by the National Science Foundation under Grant No. 1718846 to the last author.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Approximate inference in continuous determinantal processes",
"authors": [
{
"first": "Raja",
"middle": [],
"last": "Hafiz Affandi",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Fox",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Taskar",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "1430--1438",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raja Hafiz Affandi, Emily Fox, and Ben Taskar. 2013. Approximate inference in continuous determinantal processes. In Advances in Neural Information Pro- cessing Systems, pages 1430-1438.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Predicting vowel inventories: The dispersion-focalization theory revisited",
"authors": [
{
"first": "Roy",
"middle": [],
"last": "Becker-Kristal",
"suffix": ""
}
],
"year": 2006,
"venue": "The Journal of the Acoustical Society of America",
"volume": "120",
"issue": "5",
"pages": "3248--3248",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roy Becker-Kristal. 2006. Predicting vowel inven- tories: The dispersion-focalization theory revisited. The Journal of the Acoustical Society of America, 120(5):3248-3248.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Acoustic Typology of Vowel Inventories and Dispersion Theory: Insights from a Large Cross-Linguistic Corpus",
"authors": [
{
"first": "Roy",
"middle": [],
"last": "Becker-Kristal",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roy Becker-Kristal. 2010. Acoustic Typology of Vowel Inventories and Dispersion Theory: Insights from a Large Cross-Linguistic Corpus. Ph.D. thesis, UCLA.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Praat, a system for doing phonetics by computer",
"authors": [
{
"first": "Paulus Petrus Gerardus",
"middle": [],
"last": "Boersma",
"suffix": ""
}
],
"year": 2002,
"venue": "Glot International",
"volume": "5",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paulus Petrus Gerardus Boersma et al. 2002. Praat, a system for doing phonetics by computer. Glot Inter- national, 5.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Eynard-Mehta theorem, Schur process, and their Pfaffian analogs",
"authors": [
{
"first": "Alexei",
"middle": [],
"last": "Borodin",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"M"
],
"last": "Rains",
"suffix": ""
}
],
"year": 2005,
"venue": "Journal of Statistical Physics",
"volume": "121",
"issue": "3-4",
"pages": "291--317",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexei Borodin and Eric M. Rains. 2005. Eynard- Mehta theorem, Schur process, and their Pfaffian analogs. Journal of Statistical Physics, 121(3- 4):291-317.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Introduction",
"authors": [
{
"first": "Bernard",
"middle": [],
"last": "Comrie",
"suffix": ""
},
{
"first": "Matthew",
"middle": [
"S"
],
"last": "Dryer",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Gil",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Haspelmath",
"suffix": ""
}
],
"year": 2013,
"venue": "The World Atlas of Language Structures Online",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bernard Comrie, Matthew S. Dryer, David Gil, and Martin Haspelmath. 2013. Introduction. In Matthew S. Dryer and Martin Haspelmath, editors, The World Atlas of Language Structures Online.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Probabilistic typology: Deep generative models of vowel inventories",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan Cotterell and Jason Eisner. 2017. Probabilistic typology: Deep generative models of vowel inven- tories. In Proceedings of the 55th Annual Meet- ing of the Association for Computational Linguistics (ACL), Vancouver, Canada.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Maximum likelihood from incomplete data via the EM algorithm",
"authors": [
{
"first": "Arthur",
"middle": [
"P"
],
"last": "Dempster",
"suffix": ""
},
{
"first": "Nan",
"middle": [
"M"
],
"last": "Laird",
"suffix": ""
},
{
"first": "Donald",
"middle": [
"B"
],
"last": "",
"suffix": ""
}
],
"year": 1977,
"venue": "Journal of the Royal Rtatistical Society, Series B (Statistical Methodology)",
"volume": "",
"issue": "",
"pages": "1--38",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arthur P. Dempster, Nan M. Laird, and Donald B. Ru- bin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Rta- tistical Society, Series B (Statistical Methodology), pages 1-38.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Speech Processing: A Dynamic and Optimization-Oriented Approach",
"authors": [
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Douglas O'",
"middle": [],
"last": "Shaughnessy",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li Deng and Douglas O'Shaughnessy. 2003. Speech Processing: A Dynamic and Optimization-Oriented Approach. CRC Press.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images",
"authors": [
{
"first": "Stuart",
"middle": [],
"last": "Geman",
"suffix": ""
},
{
"first": "Donald",
"middle": [],
"last": "Geman",
"suffix": ""
}
],
"year": 1984,
"venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence",
"volume": "",
"issue": "6",
"pages": "721--741",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stuart Geman and Donald Geman. 1984. Stochas- tic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, (6):721-741.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Phonological Typology",
"authors": [
{
"first": "Matthew",
"middle": [
"K"
],
"last": "Gordon",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew K. Gordon. 2016. Phonological Typology. Oxford.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Reversible jump Markov chain Monte Carlo computation and Bayesian model determination",
"authors": [
{
"first": "J",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Green",
"suffix": ""
}
],
"year": 1995,
"venue": "Biometrika",
"volume": "82",
"issue": "4",
"pages": "711--732",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter J. Green. 1995. Reversible jump Markov chain Monte Carlo computation and Bayesian model de- termination. Biometrika, 82(4):711-732.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Probability product kernels",
"authors": [
{
"first": "Tony",
"middle": [],
"last": "Jebara",
"suffix": ""
},
{
"first": "Risi",
"middle": [],
"last": "Kondor",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Howard",
"suffix": ""
}
],
"year": 2004,
"venue": "Journal of Machine Learning Research",
"volume": "5",
"issue": "",
"pages": "819--844",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tony Jebara, Risi Kondor, and Andrew Howard. 2004. Probability product kernels. Journal of Machine Learning Research, 5:819-844.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Vowels and Consonants: An Introduction to the Sounds of Languages",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Ladefoged",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Ladefoged. 2001. Vowels and Consonants: An Introduction to the Sounds of Languages. Wiley- Blackwell.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Implementations of the Monte Carlo EM algorithm",
"authors": [
{
"first": "A",
"middle": [],
"last": "Richard",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Levine",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Casella",
"suffix": ""
}
],
"year": 2001,
"venue": "Journal of Computational and Graphical Statistics",
"volume": "10",
"issue": "3",
"pages": "422--439",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard A. Levine and George Casella. 2001. Im- plementations of the Monte Carlo EM algorithm. Journal of Computational and Graphical Statistics, 10(3):422-439.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Numerical simulation of vowel quality systems: The role of perceptual contrast. Language",
"authors": [
{
"first": "Johan",
"middle": [],
"last": "Liljencrants",
"suffix": ""
},
{
"first": "Bj\u00f6rn",
"middle": [],
"last": "Lindblom",
"suffix": ""
}
],
"year": 1972,
"venue": "",
"volume": "",
"issue": "",
"pages": "839--862",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johan Liljencrants and Bj\u00f6rn Lindblom. 1972. Numer- ical simulation of vowel quality systems: The role of perceptual contrast. Language, pages 839-862.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Explaining vowel inventory tendencies via simulation: Finding a role for quantal locations and formant normalization",
"authors": [
{
"first": "Brian",
"middle": [],
"last": "Roark",
"suffix": ""
}
],
"year": 2001,
"venue": "North East Linguistic Society",
"volume": "31",
"issue": "",
"pages": "419--434",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brian Roark. 2001. Explaining vowel inventory ten- dencies via simulation: Finding a role for quantal locations and formant normalization. In North East Linguistic Society, volume 31, pages 419-434.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "The dispersion-focalization theory of sound systems",
"authors": [
{
"first": "Jean-Luc",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Abry",
"suffix": ""
},
{
"first": "Louis-Jean",
"middle": [],
"last": "Bo\u00eb",
"suffix": ""
},
{
"first": "Nathalie",
"middle": [],
"last": "Vall\u00e9e",
"suffix": ""
},
{
"first": "Lucie",
"middle": [],
"last": "M\u00e9nard",
"suffix": ""
}
],
"year": 2005,
"venue": "The Journal of the Acoustical Society of America",
"volume": "117",
"issue": "4",
"pages": "2422--2422",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jean-Luc Schwartz, Christian Abry, Louis-Jean Bo\u00eb, Nathalie Vall\u00e9e, and Lucie M\u00e9nard. 2005. The dispersion-focalization theory of sound systems. The Journal of the Acoustical Society of America, 117(4):2422-2422.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "The dispersionfocalization theory of vowel systems",
"authors": [
{
"first": "Jean-Luc",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Louis-Jean",
"middle": [],
"last": "Bo\u00eb",
"suffix": ""
},
{
"first": "Nathalie",
"middle": [],
"last": "Vall\u00e9e",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Abry",
"suffix": ""
}
],
"year": 1997,
"venue": "Journal of Phonetics",
"volume": "25",
"issue": "3",
"pages": "255--286",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jean-Luc Schwartz, Louis-Jean Bo\u00eb, Nathalie Vall\u00e9e, and Christian Abry. 1997. The dispersion- focalization theory of vowel systems. Journal of Phonetics, 25(3):255-286.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Probability, Statistics, and Random Processes for Engineers",
"authors": [
{
"first": "Henry",
"middle": [],
"last": "Stark",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Woods",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Henry Stark and John Woods. 2011. Probability, Statis- tics, and Random Processes for Engineers. Pearson.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Relational properties as perceptual correlates of phonetic features",
"authors": [
{
"first": "Kenneth",
"middle": [
"N"
],
"last": "Stevens",
"suffix": ""
}
],
"year": 1987,
"venue": "International Conference of Phonetic Sciences",
"volume": "",
"issue": "",
"pages": "352--355",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenneth N. Stevens. 1987. Relational properties as per- ceptual correlates of phonetic features. In Interna- tional Conference of Phonetic Sciences, pages 352- 355.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "On the quantal nature of speech",
"authors": [
{
"first": "Kenneth",
"middle": [
"N"
],
"last": "Stevens",
"suffix": ""
}
],
"year": 1989,
"venue": "Journal of Phonetics",
"volume": "17",
"issue": "",
"pages": "3--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenneth N. Stevens. 1989. On the quantal nature of speech. Journal of Phonetics, 17:3-45.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Cloze procedure: a new tool for measuring readability",
"authors": [
{
"first": "Wilson",
"middle": [
"L"
],
"last": "Taylor",
"suffix": ""
}
],
"year": 1953,
"venue": "Journalism and Mass Communication Quarterly",
"volume": "30",
"issue": "4",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wilson L. Taylor. 1953. Cloze procedure: a new tool for measuring readability. Journalism and Mass Communication Quarterly, 30(4):415.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "The complexity of computing the permanent",
"authors": [
{
"first": "Leslie",
"middle": [
"G"
],
"last": "Valiant",
"suffix": ""
}
],
"year": 1979,
"venue": "Theoretical Computer Science",
"volume": "8",
"issue": "2",
"pages": "189--201",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leslie G. Valiant. 1979. The complexity of comput- ing the permanent. Theoretical Computer Science, 8(2):189-201.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Efficient sampling for bipartite matching problems",
"authors": [
{
"first": "Maksims",
"middle": [],
"last": "Volkovs",
"suffix": ""
},
{
"first": "Richard",
"middle": [
"S"
],
"last": "Zemel",
"suffix": ""
}
],
"year": 2012,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "1313--1321",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maksims Volkovs and Richard S. Zemel. 2012. Effi- cient sampling for bipartite matching problems. In Advances in Neural Information Processing Systems, pages 1313-1321.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Evaluation methods for topic models",
"authors": [
{
"first": "Hanna",
"middle": [],
"last": "Wallach",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Murray",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mimno",
"suffix": ""
}
],
"year": 2009,
"venue": "International Conference on Machine Learning (ICML)",
"volume": "",
"issue": "",
"pages": "1105--1112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hanna Wallach, Ian Murray, Ruslan Salakhutdinov, and David Mimno. 2009. Evaluation methods for topic models. In International Conference on Ma- chine Learning (ICML), pages 1105-1112.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"text": "Example spectrogram of the three English vowels:",
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"num": null,
"text": "A graph of v = (F1, F2) in the union of all the training languages' inventories, color-coded by inferred phone (N = 50).",
"type_str": "figure"
},
"TABREF1": {
"type_str": "table",
"content": "<table/>",
"html": null,
"text": "Cross-entropy in nats per language (lower is better)",
"num": null
}
}
}
}