{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:10:35.430255Z" }, "title": "An Automatic Vowel Space Generator for Language Learners' Pronunciation Acquisition and Correction", "authors": [ { "first": "Xinyuan", "middle": [], "last": "Chao", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Australian National University", "location": { "country": "Australia" } }, "email": "" }, { "first": "Charbel", "middle": [], "last": "El-Khaissi", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Australian National University", "location": { "country": "Australia" } }, "email": "charbel.el-khaissi@anu.edu.au" }, { "first": "Nicholas", "middle": [], "last": "Kuo", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Australian National University", "location": { "country": "Australia" } }, "email": "nicholas.kuo@anu.edu.au" }, { "first": "Priscilla", "middle": [], "last": "Kan", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Australian National University", "location": { "country": "Australia" } }, "email": "priscilla.kanjohn@anu.edu.au" }, { "first": "Hanna", "middle": [], "last": "Suominen", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Australian National University", "location": { "country": "Australia" } }, "email": "hanna.suominen@anu.edu.au" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Speech visualisations are known to help language learners to acquire correct pronunciation and promote a better study experience. We present a two-step approach based on two established techniques to display tongue tip movements of an acoustic speech signal on a vowel space plot. First, we use Energy Entropy Ratio to extract vowels; and then, we apply the Linear Predictive Coding root method to estimate Formant 1 and Formant 2. We invited and collected acoustic data from one Modern Standard Arabic (MSA) lecturer and four MSA students. Our proof of concept was able to reflect differences between the tongue tip movements in a native MSA speaker to those of a MSA language learner at a vocabulary level. This paper addresses principle methods for generating features that reflect bio-physiological features of speech and thus, facilitates an approach that can be generally adapted to languages other than MSA.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Speech visualisations are known to help language learners to acquire correct pronunciation and promote a better study experience. We present a two-step approach based on two established techniques to display tongue tip movements of an acoustic speech signal on a vowel space plot. First, we use Energy Entropy Ratio to extract vowels; and then, we apply the Linear Predictive Coding root method to estimate Formant 1 and Formant 2. We invited and collected acoustic data from one Modern Standard Arabic (MSA) lecturer and four MSA students. Our proof of concept was able to reflect differences between the tongue tip movements in a native MSA speaker to those of a MSA language learner at a vocabulary level. 
This paper addresses principled methods for generating features that reflect bio-physiological features of speech and thus facilitates an approach that can be generally adapted to languages other than MSA.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Second language (L2) learners have difficulty pronouncing words as well as native speakers do (Burgess and Spencer, 2000), which can create inconveniences in social interactions (Derwing and Munro, 2005). The difficulty language teachers face in providing pronunciation instruction adds further challenges to L2 pronunciation training and correction (Breitkreutz et al., 2001).", "cite_spans": [ { "start": 96, "end": 123, "text": "(Burgess and Spencer, 2000)", "ref_id": "BIBREF3" }, { "start": 179, "end": 204, "text": "(Derwing and Munro, 2005)", "ref_id": "BIBREF5" }, { "start": 345, "end": 371, "text": "(Breitkreutz et al., 2001)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "One solution to assist pronunciation acquisition is the adoption of educational software applications (Levis, 2007). Well-designed language-education software can provide straightforward guidance for correcting L2 pronunciation through multiple information sources. One instance of such auxiliary systems is the Pronunciation Learning Aid (PLA), which supports language students towards native-like pronunciation in a target language (Fudholi and Suominen, 2018). A PLA achieves this by evaluating students' produced speech to reflect their pronunciation status. Another instance is visual cues, which serve as a friendly and accessible form of feedback for language students (Yoshida, 2018).", "cite_spans": [ { "start": 110, "end": 123, "text": "(Levis, 2007)", "ref_id": "BIBREF14" }, { "start": 433, "end": 461, "text": "(Fudholi and Suominen, 2018)", "ref_id": "BIBREF8" }, { "start": 696, "end": 711, "text": "(Yoshida, 2018)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "By combining language lecturers' teaching with auxiliary systems, our aim is to assist students both in a classroom setting and in their individual practice. We present a prototype system that displays visual feedback on tongue movements to help language learners acquire correct pronunciation during L2 study. We adopted a human-centred, design-oriented approach to developing the system, applying a methodology that draws on Design Science Research (DSR) (Hevner et al., 2004) and Design Thinking (DT) (Plattner et al., 2009). Unlike machine learning methods, which train deep neural networks to predict articulatory movements (Yu et al., 2018), our proposed system uses vowel space plots based on bio-physiological features to help visualise tongue movements.", "cite_spans": [ { "start": 524, "end": 545, "text": "(Hevner et al., 2004)", "ref_id": "BIBREF9" }, { "start": 571, "end": 594, "text": "(Plattner et al., 2009)", "ref_id": "BIBREF22" }, { "start": 697, "end": 713, "text": "(Yu et al., 2018", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the present work, we introduce a versatile prototype of our vowel space plot generator to address these challenges for students primarily learning MSA. 
Our design aims to allow L2 beginner learners to quickly visualise their status of pronunciation compared to those by their language teachers. We provide a reference vowel space plot adjacent to the students' own plots to reflect clear differences to support self-corrections. The envisioned applicability ranges from in-class activities to provide immediate and personalised suggestion to remote learning where in both cases glossary files are preuploaded by teachers or textbook publishers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Traditional acoustic plots, such as waveforms, spectrograms, and other feature plots are applied to vi-sualise speech signals and can provide sufficient information to phoneticians, expert scientists, and engineers (Fouz-Gonz\u00e1lez, 2015) . However, these methods fall short in providing straightforward suggestions for improving language students' pronunciation or otherwise lack an intuitive and userfriendly graphic user interface (Neri et al., 2002) . A study proposed by Dibra et al. (2014) adopted the combination of waveform and highlighting syllables to visualise pronunciation in ESL studying shows using acoustic plots to support pronunciation acquisition is an implementable method.", "cite_spans": [ { "start": 215, "end": 236, "text": "(Fouz-Gonz\u00e1lez, 2015)", "ref_id": "BIBREF7" }, { "start": 432, "end": 451, "text": "(Neri et al., 2002)", "ref_id": "BIBREF18" }, { "start": 474, "end": 493, "text": "Dibra et al. (2014)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Different from acoustic plots, another thinking of pronunciation visualisation was considered based on people's bio-physiological features. A pioneer study with this idea was introduced by Tye- Murray et al. (1993) , in which they discussed the effect of increasing the amount of visible articulatory information, such as non-visible articulatory gestures, on speech comprehension. With the improvement of equipment, Ultrasound imaging, Magnetic Resonance Imaging (MRI), and Elec-troMagnetic Articulography (EMA) can be alternative approaches to visualise the movement of articulators, and several study cases on pronunciation visualisation were implemented by Stone (2005) , Narayanan et al. (2004) , and Katz and Mehta (2015). However, these approaches are still difficult to be implemented in daily language studying since relevant equipment are often not available for in-class activities and self-learning, and generated images and videos are hard to be understood by ordinary learners.", "cite_spans": [ { "start": 194, "end": 214, "text": "Murray et al. (1993)", "ref_id": "BIBREF30" }, { "start": 661, "end": 673, "text": "Stone (2005)", "ref_id": "BIBREF28" }, { "start": 676, "end": 699, "text": "Narayanan et al. (2004)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Enlightened by imaging the movement of articulators, the idea of talking head, which is using 3D mesh model to display of both the appearance articulators and internal articulators, was introduced. Some of the fundamental works of talking head were completed by Odisio et al. (2004) , and Serrurier and Badin (2008) . With the techniques of articulatory movement prediction, such as Gaussian Mixture Model (GMM) (Toda et al., 2008) , Hidden Markov model (HMM) (Ling et al., 2010) , and popular deep learning approach (Yu et al., 2019) . 
Although talking head is developing swiftly, the research about performance of talking head for pronunciation training is still insufficient.", "cite_spans": [ { "start": 262, "end": 282, "text": "Odisio et al. (2004)", "ref_id": "BIBREF19" }, { "start": 303, "end": 315, "text": "Badin (2008)", "ref_id": "BIBREF24" }, { "start": 412, "end": 431, "text": "(Toda et al., 2008)", "ref_id": "BIBREF29" }, { "start": 460, "end": 479, "text": "(Ling et al., 2010)", "ref_id": "BIBREF16" }, { "start": 517, "end": 534, "text": "(Yu et al., 2019)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The place and manner of articulation are well established variables in the study of speech production and perception (e.g. Badin et al., 2010) . Early research has already realised the potential of using vowel space plots to achieve pronunciation visualisation, such as the studies by Paganus et al. (2006) and Iribe et al. (2012) . These studies indicate that for language learners, vowel space plots are easy-to-understand, straightforward, and provide the necessary information for understanding their own tongue placement and movement. Therefore, vowel space plots are considered a useful tool for language learners to practice and correct their pronunciation relative to other pronunciation correction tools, such as ultrasound visual feedback or more traditional pedadogical methods like explicit correction and repetition.", "cite_spans": [ { "start": 123, "end": 142, "text": "Badin et al., 2010)", "ref_id": "BIBREF0" }, { "start": 285, "end": 306, "text": "Paganus et al. (2006)", "ref_id": "BIBREF20" }, { "start": 311, "end": 330, "text": "Iribe et al. (2012)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "To visualise the tongue movement based on students' pronunciation practice, our proposed system needs to receive students' pronunciation audio signal as its input. After the process of vowel detection, vowel extraction, and formant estimation, the system can automatically generate the corresponding vowel space plot as its output. In this section, we will introduce how engineering and linguistics insights inspired our proposed method, and the details of audio signal processing procedures.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Proposed Approach", "sec_num": "3" }, { "text": "To find a reliable solution for language students on the challenges about pronunciation acquisition, we adopted a design-based approach and implemented a human-centred approach by using the Design Thinking framework (Plattner et al., 2009) to find the students' needs in terms of pronunciation practice and transform these into requirements. In the Empathy and Define phases of DT, we defined our research question as \"Finding an implementable and friendly approach for language learners to help them practice their pronunciation\". After this, we participated in an MSA tutorial and observed students' behaviours during the process of pronunciation acquisition. Finally, we generated an online questionnaire for students which asks their inclass pronunciation training experience and their study preferences. 
The details of this survey were introduced in the thesis by Chao (2019) .", "cite_spans": [ { "start": 216, "end": 239, "text": "(Plattner et al., 2009)", "ref_id": "BIBREF22" }, { "start": 869, "end": 880, "text": "Chao (2019)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Design Methodology", "sec_num": "3.1" }, { "text": "Based on the observation of MSA tutorial, we found that students feel comfortable to interact with other people (lecturer or classmates) during pronunciation process. One advantage for interaction is other people can provide feedback on students' pronunciation. Another finding from observation is the process of pronunciation acquisition can be seen as a process of imitation. Students need a gold-standard, such as teachers' pronunciation, as a reference to acquire new pronunciation and correct mispronunciation. The survey gives us some insights into students preferences about pronunciation study pattern. One of the most important insight is that students are interested in multi-source feedback of pronunciation training. For ordinary pronunciation, training students can only receive auditory information of pronunciation. Therefore, if a straightforward and easy-understanding visual feedback can be adopted in our proposed method, students will have a better experience and higher efficiency on pronunciation training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Design Methodology", "sec_num": "3.1" }, { "text": "The DT Empathy and Define phases gave us the insight that an ideal auxiliary pronunciation system should interact with learners, provide gold-standard pronunciation reference, and display reliable visual feedback to learners. The insight gained led to ideation discussions leading to the selection of vowel space plots as visualisation tool. We augmented the use of DT with the DSR approach, in the manner of John et al. 2020's study, to guide the development of our the artefact generated from our insights. Using the DSR method introduced by Peffers et al. 2007, we (1) identified our research question based on a research project which is about assisting new language learner on pronunciation acquisition with potential educational softwares, (2) defined our solution according to our observation and survey, (3) designed and developed our prototype of vowel space plot generator, (4) demonstrated our prototype to MSA lecturers and students, (5) and evaluated the prototype's performance. The DT and DSR process underpin all our methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Design Methodology", "sec_num": "3.1" }, { "text": "Our proposed prototype uses vowel space plots as a tool to visualise the acoustic input. This visualisation then forms the basis for subsequent feedback on pronunciation features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Vowel Space Plot", "sec_num": "3.2" }, { "text": "A vowel space plot is generated by plotting vowel formant values on a graph that approximates the human vocal tract (Figures 1(a) and 1(b)). F1 and F2 vowel formant values correlate with the position of the tongue during articulation (Lieberman and Blumstein, 1988) . Specifically, F1 is associated with the height of the tongue body (tongue height) and plotted along the vertical axis, while its The correlation between formant values and the tongue's height and placement is referred to as the formant-articulation relationship (Lee et al., 2015). 
These F1-F2 formant values can be rendered as x-y coordinates on a 2D plot to visualise the relative height and placement of the tongue in the oral cavity during articulation. When visualised alongside the tongue position of a native speaker's pronunciation, users can then see the position of their tongue relative to a standard reference or benchmark of their choice, such as an L2 teacher or native speaker. This visualisation supports pronunciation feedback and correction as users could then rectify the placement and/or height of their tongue during articulation to more closely align with its position in an equivalent native-like pronunciation.", "cite_spans": [ { "start": 234, "end": 265, "text": "(Lieberman and Blumstein, 1988)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 116, "end": 129, "text": "(Figures 1(a)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Vowel Space Plot", "sec_num": "3.2" }, { "text": "To extract vowels from input speech signal, first, we calculate relevant energy criteria and find speech segments. Once speech segments were confirmed, we then use defined thresholds and detect vowels from these speech segments. This section will introduce the energy criteria and the thresholds we adopted in our practice.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Vowel Detection and Perception", "sec_num": "3.3" }, { "text": "Before detecting vowels in a speech signal, detrending and speech-background discrimination are two necessary steps of pre-processing. These steps ensure that only the correct speech information from the original signal is extracted, while other possible noise is ignored. In this way, the prototype minimises the possibility of including irrelevant signals during the feature extraction process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Vowel Detection and Perception", "sec_num": "3.3" }, { "text": "Our prototype adopted the spectral subtraction algorithm to achieve speech-background discrimination, as first introduced by Boll (1979) . And the detrending can be achieved by the classic least squares method. Our approach used Energy Entropy Ratio (EER), which is a calculated feature from input signal, as the criteria to find vowels from input speech signal. The EER can be calculated as following steps.", "cite_spans": [ { "start": 125, "end": 136, "text": "Boll (1979)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Vowel Detection and Perception", "sec_num": "3.3" }, { "text": "The spectral entropy (SE) of a signal describes its spectral power distribution (Shen et al., 1998) . SE treats the signal's normalised power distribution within the frequency domain as a probability distribution and calculates its Shannon entropy. To demonstrate the probability distribution of a signal, let a sampled time-domain speech signal be x(n), where the ith frame of x(n) is x i (k) and the mth of the power spectrum Y i (m) is the Discrete Fourier Transformation (DFT) of x i (k). 
If N is the length of Fast Fourier Transformation (FFT), the probability distribution P i (m) of the signal can be then expressed as", "cite_spans": [ { "start": 80, "end": 99, "text": "(Shen et al., 1998)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Vowel Detection and Perception", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p i (m) = Y i (m) N/2 l=0 Y i (l) .", "eq_num": "(1)" } ], "section": "Vowel Detection and Perception", "sec_num": "3.3" }, { "text": "The definition of short-time spectral entropy for each frame of the signal can be further shown as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Vowel Detection and Perception", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "H i = \u2212 N/2 k=0 p i (k) log p i (k).", "eq_num": "(2)" } ], "section": "Vowel Detection and Perception", "sec_num": "3.3" }, { "text": "The spectral entropy reflects the disorder or randomness of a signal. The distribution of normalised spectral probability for noise is even, which makes the spectral entropy value of noise great. Due to the presence of formants in the spectrum of signals in human speech, the distribution of normalised spectral probability is uneven, which makes the spectral entropy value small. This phenomenon can be used with speech-background discrimination to find out endpoints of speech segments. In its practical application, SE is robust under the influence of noise. But spectral entropy cannot be applied for signals with a low signal-tonoise ratio (SNR) because when SNR decreases, the time-domain plot of spectral entropy will keep the original shape, but with a smaller amplitude. This makes SE insensitive to distinguishing speech segments from background noise. To provide a more reliable method of detecting the beginning and end of speech intervals, we introduce where E i is the energy of the i th frame of a speech signal, and H i is the corresponding SE. Speech segments will have larger energy and smaller SE than silent segments. A division of these two shortterm factors makes the difference between speech segments and silent segments more obvious. The first threshold T 1 was implemented as the criterion to judge if the segment contains speech or not. The value of T 1 can be adjusted, and in our case we chose T 1 = 0.1 which performs well. 
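To make this step concrete, the following minimal sketch (in Python with NumPy; the frame length, hop size, window, and smoothing constants are illustrative assumptions rather than the prototype's exact settings) computes the per-frame energy entropy ratio introduced above and marks the frames whose ratio exceeds the speech threshold T 1:

```python
import numpy as np

def energy_entropy_ratio(x, fs, frame_len=0.025, hop_len=0.010, n_fft=512):
    """Per-frame energy entropy ratio of a mono signal x sampled at fs Hz."""
    width, step = int(frame_len * fs), int(hop_len * fs)
    window = np.hamming(width)
    eer = []
    for start in range(0, len(x) - width + 1, step):
        frame = x[start:start + width] * window
        energy = np.sum(frame ** 2)                     # short-time energy E_i
        power = np.abs(np.fft.rfft(frame, n_fft)) ** 2  # power spectrum Y_i(m)
        p = power / (np.sum(power) + 1e-12)             # normalised distribution p_i(m), Eq. (1)
        entropy = -np.sum(p * np.log(p + 1e-12))        # spectral entropy H_i, Eq. (2)
        eer.append(1.0 + np.abs(energy / entropy))      # energy entropy ratio EER_i
    return np.array(eer)

def speech_frames(eer, t1=0.1):
    """Boolean mask of frames treated as speech: EER above the threshold T1."""
    return eer > t1
```

Depending on how the input signal is scaled, the raw ratio may need normalisation before comparison with a fixed threshold; the value T 1 = 0.1 above simply mirrors the setting reported here.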
Thus, segments with an energy entropy ratio larger than T 1 were classified as speech segments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Vowel Detection and Perception", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "EER i = 1 + |E i /H i |,", "eq_num": "(3)" } ], "section": "Vowel Detection and Perception", "sec_num": "3.3" }, { "text": "In each speech segment that is extracted, the maximum energy entropy ratio, E max , and scale factor r 2 , were used to set another threshold T 2 for detecting vowel segments:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Vowel Detection and Perception", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "T 2 = r 2 E max .", "eq_num": "(4)" } ], "section": "Vowel Detection and Perception", "sec_num": "3.3" }, { "text": "Since different speech segments may have a different threshold T 2 , segments with an energy entropy ratio larger than T 2 were used to detect vowels.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Vowel Detection and Perception", "sec_num": "3.3" }, { "text": "In an example visualisation of vowel detection and segmentation (Figure 2 ), three vowel phonemes -/a/, /i/, and /u/ -are contained in the speech signal. The black dashed horizontal lines show the threshold value T 1 = 0.1 for speech segment detection, while the solid orange lines show the detected speech segments within the speech signal. Similarly, the black vertical lines in bold indicate a dynamic threshold value T 2 for vowel detection across different speech segments, while the blue dashed lines display the vowel segments.", "cite_spans": [], "ref_spans": [ { "start": 64, "end": 73, "text": "(Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Vowel Detection and Perception", "sec_num": "3.3" }, { "text": "Formant value estimation is the next task after the detection of vowel segments from input speech signals. Our prototype adopted the Linear Predictive Coding (LPC) root method to estimate the F1 and F2 formant values for vowels.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Formant Estimation", "sec_num": "3.4" }, { "text": "A common pre-processing step for linear predictive coding is pre-emphasis (highpass) filtering. We apply a straightforward first-order highpass filter to complete this task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Formant Estimation", "sec_num": "3.4" }, { "text": "A simplified speech production model, which we adopted in our work is represented in Figure 3 following Rabiner and Schafer (2010) . As shown in Figure 3, s[n] is the output of the speech production system, u[n] is the excitation from the throat, G is a gain parameter and H(z) is a vocal tract system function. 
Let us consider the transfer function of H(z) as an Auto-Regression (AR) model", "cite_spans": [ { "start": 104, "end": 130, "text": "Rabiner and Schafer (2010)", "ref_id": "BIBREF23" } ], "ref_spans": [ { "start": 85, "end": 93, "text": "Figure 3", "ref_id": null }, { "start": 145, "end": 159, "text": "Figure 3, s[n]", "ref_id": null } ], "eq_spans": [], "section": "Formant Estimation", "sec_num": "3.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "H(z) = G A(z) = G 1 \u2212 p k=1 a k z \u2212k", "eq_num": "(5)" } ], "section": "Formant Estimation", "sec_num": "3.4" }, { "text": "where A(z) is the prediction error filter, which is used in the LPC root method below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Formant Estimation", "sec_num": "3.4" }, { "text": "The polynomial coefficient decomposition of prediction error filter A(z) can be used to estimate the centre of formants and their bandwidth. This method is known as the LPC root method, which was first introduced by Snell and Milinazzo (1993) . Notably, the roots of A(z) are mostly complex conjugate paired roots. Let z i = r i e j\u03b8 i be any value of a complex root of A(z), where its conjugate z * i = r i e \u2212i\u03b8 i is one of the roots of A(z). Further, if F i is the formant frequency corresponding to z i , and B i is the bandwidth at 3dB, then we have the relationships 2\u03c0T F i = \u03b8 i and e \u2212B i \u03c0T = r i , where T is sampling period. Their solutions are F i = \u03b8 i /(2\u03c0T ) and B i = \u2212 ln r i /\u03c0T .", "cite_spans": [ { "start": 216, "end": 242, "text": "Snell and Milinazzo (1993)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Formant Estimation", "sec_num": "3.4" }, { "text": "Since the order p of prediction error filter is set in advance, the pair number of complex conjugate Figure 3 : A simplified model of speech production roots will be up to p/2. This makes it straightforward to find which pole belongs to which formant, since extra poles with a bandwidth larger than a formant's bandwidth may be conveniently excluded.", "cite_spans": [], "ref_spans": [ { "start": 101, "end": 109, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Formant Estimation", "sec_num": "3.4" }, { "text": "We conducted two experiments to evaluate the performance of our prototype. First, we invited a native Arabic speaker who is a Modern Standard Arabic (MSA) lecturer at The Australian National University (ANU) to provide a glossary of MSA lexicon and their corresponding utterances. These utterances constituted the gold-standard or target pronunciation for users. Then, we invited four MSA language students to use our prototype by pronouncing four MSA words. For each lexical item pronounced, the articulation was visualised on a vowel space plot so users can compare their pronunciation alongside the native-like, target pronunciation of their lecturer. Following this visual comparison, users were prompted to pronounce the same word again.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminary Evaluation Experiment", "sec_num": "4" }, { "text": "In the experiments, we want to verify the feasibility and accessibility of our prototype. The feasibility of our prototype was determined by whether the interpretation of the comparison plots in the first instance supported improved pronunciation of the same word in subsequent iterations. 
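These comparisons rely on the F1 and F2 values produced by the formant-estimation step of Section 3.4. As a rough illustration of that step only (librosa is an assumed dependency, and the filter order, pre-emphasis coefficient, and bandwidth cut-off are illustrative values, not necessarily those used in the prototype), the LPC root method can be applied to a single detected vowel frame as follows:

```python
import numpy as np
import librosa

def estimate_formants(vowel_frame, fs, order=12, max_bw=400.0):
    """Formant frequencies (Hz) of one vowel frame, estimated with the LPC root method."""
    # Pre-emphasis: a straightforward first-order highpass filter
    emphasised = np.append(vowel_frame[0], vowel_frame[1:] - 0.97 * vowel_frame[:-1])
    # Coefficients of the prediction-error filter A(z) of order p
    a = librosa.lpc(emphasised * np.hamming(len(emphasised)), order=order)
    roots = np.roots(a)
    roots = roots[np.imag(roots) > 0]             # keep one root from each conjugate pair
    T = 1.0 / fs
    freqs = np.angle(roots) / (2 * np.pi * T)     # F_i = theta_i / (2 * pi * T)
    bws = -np.log(np.abs(roots)) / (np.pi * T)    # B_i = -ln(r_i) / (pi * T)
    # Exclude spurious poles: keep narrow-bandwidth candidates at plausible frequencies
    keep = (freqs > 90.0) & (bws < max_bw)
    return np.sort(freqs[keep])                   # the two lowest values approximate F1 and F2
```

Averaging the surviving candidates over the frames of a vowel segment gives one (F1, F2) point per vowel, which is what the vowel space plots display.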
And the accessibility refers to whether our prototype can provide implementable and correct feedback for learners to visualise their pronunciation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminary Evaluation Experiment", "sec_num": "4" }, { "text": "Ethical Approval (2018/520) was obtained from the Human Research Ethics Committee of The Australian National University. Each study participant provided written informed consent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preliminary Evaluation Experiment", "sec_num": "4" }, { "text": "The functionality of the prototype, including speech detection, vowel segmentation and plot generation, was first verified by using a series of acoustic signals as input to observe the accuracy of the output vowel space plot. The MSA lecturer's pronunciation of MSA lexicon was used here to test the veracity of the prototype output. The MSA dataset comprised of ten lexical items 1 and their corresponding pronunciation, henceforth referred to as the \"standard reference\" (see Table 1 ).", "cite_spans": [], "ref_spans": [ { "start": 478, "end": 485, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Feasibility Test", "sec_num": "4.1" }, { "text": "For each vocabulary item and corresponding audio input, we observed the vowel space plot gen- erated by our prototype. The accuracy and accessibility of our prototype's speech and vowel detection functionality was determined by its ability to correctly visualise tongue positioning for each vowel in a word. This was determined based on a comparison with statistical averages of formant values for the same vowel. We use a Sony Xperia Z5 mobile phone to collect the utterance of glossary from the MSA lecturer. The utterances were recorded as individual mp3 files which can be used as input of our prototype. Each mp3 file contains one MSA vocabulary in the glossary. These mp3 files were recorded in the lecturer's office to reduce background noise.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feasibility Test", "sec_num": "4.1" }, { "text": "The verification of our prototype's functionality alone is insufficient to prove that the prototype can assist in providing valuable corrective feedback to users. Therefore, we invited two male students and two female students who were enrolled in a beginner MSA course (ARAB1003) at ANU to voluntarily participate in our accessibility test The success of our prototype's feedback function was determined by whether the language learners can interpret their pronunciation on a vowel space plot against the standard reference in order to produce a more native-like pronunciation for the same word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Accessibility Test", "sec_num": "4.2" }, { "text": "Volunteers were aged between 19 and 22 and had completed an introductory MSA course (ARAB1002), which meant they had basic knowledge of MSA and were familiar with its alphabet and phonetic inventory. Four lexical items from the glossary in the standard reference were selected as test items which shown in Table 2 for the volunteers to pronounce. Volunteers pronounced each of the four vocabulary items independently, which were recorded respectively as audio files. These files were processed by our prototype and the corresponding vowel space plots were generated to visualise their pronunciation for each word. 
Then, their vowel space plots were compared to the corresponding vowel space plot of the standard reference. Participants were advised to use this comparison plot as the basis for their pronunciation feedback prior to repeating the pronunciation of the word. Then, participants pronounced the word a second time and the generated plot was once again compared to the standard reference. This time, the comparison assessed whether the participant's articulation of the vowel was more closely aligned to the standard reference compared to the first pronunciation. In other words, the second iteration of pronunciation allowed for an assessment of whether our prototype provided valuable visualisation information to participants, and whether it helped them immediately correct and improve their pronunciation relative to the standard reference.", "cite_spans": [], "ref_spans": [ { "start": 306, "end": 313, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Accessibility Test", "sec_num": "4.2" }, { "text": "We participated in one of the MSA course tutorials and were keen to see the quality of acoustic data, which were collected from a noisy circumstance, like a classroom. The collecting device was a MacBook Pro 2017. We wrote a Matlab recorder function with GUI to collect the utterance provided by volunteers who were from this tutorial. The utterance were collected as individual wav files and each file contained one word from volunteers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Accessibility Test", "sec_num": "4.2" }, { "text": "We used collected speech signals to test the feasibility and accessibility of our prototype. To test the feasibility, we fed the standard references to our prototype and verify whether the output vowel space plot can reflect the correct tongue motion of the corresponding word. As for accessibility, we used the student test data and generated the vowel space plot, and then found corresponding words from a standard reference and compare these two vowel space plots. An ideal result is the student test (a) Vowel segmentation of standard reference \"soap\" (b) Vowel space plot of standard reference \"soap\" with /\u0101/ and /\u016b/ two vowels Figure 4 : The waveform, energy-entropy ratio, and vowel space plot for standard reference word \"soap\" (provided by a MSA teacher) data can reflect the student's tongue motion, and the student can find how to improve the pronunciation by compare these two vowel space plots. With the vowel space plots of the same words from student test data and standard reference, we compared the corresponding plots to see if the corresponding plots and if the vowel space plots can provide useful feedback on pronunciation correction. In this paper, we display the MSA word \"soap\" ( , /s .\u0101 b\u016bn/) as an example of our results.", "cite_spans": [], "ref_spans": [ { "start": 634, "end": 642, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "5" }, { "text": "To test the feasibility of our prototype, we picked one vocabulary item (the word \"soap\") from standard reference and verify whether the output vowel space plot can reflect the tongue motion. 
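As an indication of how such a plot can be rendered from the estimated formants (a sketch only: matplotlib is an assumed dependency, the blue-reference/red-learner colour scheme follows the figures in this section, and the values in the usage comment are illustrative rather than measured), the (F1, F2) pairs can be drawn on reversed axes so that the plot reads as tongue height and backness:

```python
import matplotlib.pyplot as plt

def vowel_space_plot(reference, learner, labels):
    """Overlay reference (teacher) and learner (F1, F2) values on a vowel space plot.

    reference, learner: sequences of (f1_hz, f2_hz) pairs, one per detected vowel.
    labels: vowel symbols for annotation, e.g. ["a:", "u:"].
    """
    fig, ax = plt.subplots()
    for points, colour in ((reference, "tab:blue"), (learner, "tab:red")):
        for (f1, f2), symbol in zip(points, labels):
            ax.scatter(f2, f1, marker="x", color=colour)
            ax.annotate(symbol, (f2, f1), color=colour)
    # Phonetic convention: F2 decreases rightwards (front to back of the oral cavity),
    # F1 increases downwards (high to low tongue body).
    ax.invert_xaxis()
    ax.invert_yaxis()
    ax.set_xlabel("F2 (Hz), front to back")
    ax.set_ylabel("F1 (Hz), high to low")
    return fig

# Illustrative usage with made-up values:
# vowel_space_plot([(700, 1200), (350, 800)], [(650, 1500), (400, 900)], ["a:", "u:"])
```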
The waveform, energy-entropy ratio, and vowel space plot for standard reference word \"soap\" (Figure 4) .", "cite_spans": [], "ref_spans": [ { "start": 284, "end": 294, "text": "(Figure 4)", "ref_id": null } ], "eq_spans": [], "section": "Feasibility", "sec_num": "5.1" }, { "text": "From Figure 4 (a), we found two voice segments between solid orange lines that were recognised from the input speech signal, and the two voice segments, which contained one vowel between dash blue lines for each. In Figure 4(b) , the two vowels of /\u0101/ and /\u016b/ were mapped in the vowel space. This vowel space plot was made available to the users so they can get familiar with their tongue position in the oral cavity and use this visual feedback towards pronouncing the word \"soap\" correctly ( Figure 5 ).", "cite_spans": [], "ref_spans": [ { "start": 5, "end": 13, "text": "Figure 4", "ref_id": null }, { "start": 216, "end": 227, "text": "Figure 4(b)", "ref_id": null }, { "start": 494, "end": 502, "text": "Figure 5", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Feasibility", "sec_num": "5.1" }, { "text": "To test the accessibility of our prototype, we compared the vowel space plot of standard reference and the vowel space plot of student test data. We continue to use the word \"soap\" here as an example. Figures below show the results of MSA vocabulary \"soap\" pronounced by the four anonymous students. Students will see two vowel space plot from the Figure 6 shows the overlay vowel space plot of standard reference (blue crosses) and student1's pronunciation practice (red crosses). Since the key information from vowel space plot is the trend of tongue movement, it is not necessary to compare the standard reference and students' pronunciation on the same vowel space plot. From Figure 7 , stu-dent1's tongue should be drawn back instead of moving it to the front of the oral cavity. The vertical down-up movement of the tongue was correct. Figure 8 shows the tongue movement with an arrow. This is more readable and friendly for students to help them perceive their tongue movement. Student2, on the other hand, should focus on the pronunciation of the second vowel /\u016b/. According to Figure 9 , we can see that the pronunciation of \"soap\" pronounced by student2 had the correct tongue motion trajectory when compared with the standard reference of Figure 1 . This student's vertical down-up movement of the tongue was correct. A small defect for this practice was that there existed an unexpected vowel for the end of this pronunciation practice. For further practice, the advice for student1 targeted pronouncing a clean and neat end of the word \"soap\". Student3, in turn, had the correct tongue motion, and the pronunciation was good as well. However, the starting point of the first vowel /\u0101/ was somewhat higher than its standard reference. Hence, our suggestion for Student3 was to lower the starting position of the word \"soap\". Finally, student4 and student1 made similar mispronunciation: student4 should draw the tongue back instead of moving it forward while pronouncing the second vowel /\u016b/. Besides this mistake, another interesting point worthy of notice was that another unexpected vowel occurred by the end of this speech signal. According to waveform analysis, this vowel was not pronounced by student4 but originated from the background noise due to the data collection during an in-class activity. 
This meant that the sudden noise from background can still influence the analysis result although our prototype already applied its denoising algorithm to this speech signal. Hence, we made a suggestion to try to adopt a more effective denoising function as the future development of the system to satisfy the requirements from students to practice their pronunciation anywhere, including noisy settings.", "cite_spans": [], "ref_spans": [ { "start": 348, "end": 356, "text": "Figure 6", "ref_id": null }, { "start": 680, "end": 688, "text": "Figure 7", "ref_id": null }, { "start": 842, "end": 850, "text": "Figure 8", "ref_id": "FIGREF4" }, { "start": 1086, "end": 1094, "text": "Figure 9", "ref_id": "FIGREF5" }, { "start": 1250, "end": 1258, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Accessibility", "sec_num": "5.2" }, { "text": "This paper presented the initial proof of concept that used vowel space plots to enhance language learning in second languages. The idea of our prototype was based on our early stage DSR process and MSA language student survey (Chao, 2019) . Our prototype was designed to generate clear visual feedback from speech input, and it was tested to assist the pronunciation of L2 MSA beginners.", "cite_spans": [ { "start": 227, "end": 239, "text": "(Chao, 2019)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Our main contribution is the vowel space plot generator prototype which produces easily understandable visual cues from analysing the biophysiological features of user speech. Our prototype is hence user-friendly for improving language learner pronunciation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "To gain evidence of our prototype being effective on assisting language learners' pronunciation training, we designed an experiment to test at the vocabulary level the feasibility and accessibility of the prototype and invited language students to provide their audio data for experimental use. Also, according to students' feedback, we proposed a series of future developments that are described in the next section. One limitation of our presented work is that there was no re-testing of pronunciation after the students received feedback from the system to check that their pronunciation improved. We plan to deploy re-tests as mentioned in our next stage experiments", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "In the future, we aim to build on this current work to verify and quantify the pronunciation improvements gained from each user. This will help us to understand the effectiveness of this current design of the prototype and enable us to select appropriate extensions to enhance L2 learning experiences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future Work", "sec_num": "7" }, { "text": "We are currently considering to build a correction subsystem for pronunciation practice. In addition to the existing vowel space plots, we theorise that it would be helpful to construct a system that could directly compare our users' speech to a set of externally stored standard references. This should enable the users to correct their pronunciation with higher precision and efficiency. 
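As a minimal sketch of what such a comparison might look like, offered purely as an assumption about one possible design rather than a description of an implemented feature, a correction subsystem could score each vowel by its distance from the stored reference in the (F1, F2) plane:

```python
import numpy as np

def vowel_distances(learner, reference):
    """Per-vowel Euclidean distance (Hz) between learner and reference in the F1-F2 plane.

    learner, reference: arrays of shape (n_vowels, 2) holding (F1, F2) for each vowel.
    Smaller values indicate a tongue position closer to the stored reference.
    """
    learner = np.asarray(learner, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return np.linalg.norm(learner - reference, axis=1)

# Illustrative usage with made-up values:
# vowel_distances([(650, 1500), (400, 900)], [(700, 1200), (350, 800)])
```

In practice such scores would likely need per-speaker formant normalisation before they could be ranked or thresholded reliably.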
Such a design could also potentially provide personalised pronunciation assistance via analysing user-specific pronunciation patterns.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future Work", "sec_num": "7" }, { "text": "Future iterations also intend to test a much more varied selection of MSA words that capture both short and long vowels in word initial, medial and final positions, as well as the two MSA dipthongs /aw/ (e.g. /d . aw/ 'light') and /aj/ (e.g. /bajt/ 'house') and MSA consonant.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future Work", "sec_num": "7" }, { "text": "Another potential future direction is to animate the tongue motion. Iribe et al. (2012) showed that such animations could achieve better results than their static counterparts. We expect the animated version of the vowel space plot to display tongue motions while people speak to help users to better conceptualise pronunciation in real-time.", "cite_spans": [ { "start": 68, "end": 87, "text": "Iribe et al. (2012)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Future Work", "sec_num": "7" }, { "text": "8 Clarification: MSA Vocabulary Selection", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future Work", "sec_num": "7" }, { "text": "The justification for the selection of the above ten words was based on a variety of factors. First, the selected vocabulary items were basic MSA words chosen in consultation with an MSA teacher to ensure students had been explicitly taught or otherwise been exposed to them during the course of their language learning. Second, the selected words were restricted to one-to-three syllabic words only. This restriction ensured that sentence-level factors affecting the articulation of vowels were excluded (e.g. /t/insertion rule in Id .\u0101 fah structures; /s\u0101 ' a/ \"clock\" vs. /s\u0101 ' at jusif/ \"Joseph's clock\"), thus allowing for a straightforward assessment of how the prototype detected speech boundaries and extracted the relevant features from vowel segments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future Work", "sec_num": "7" }, { "text": "Finally, the ten words selected captured the three, cardinal MSA vowels: /a/ i/ and /u/. Although these vowels exist in the English phonemic inventory and do not theoretically pose a challenge for English-speaking L2 learners of MSA, when they are considered alongside surrounding MSA consonants then their articulation becomes more difficult, such as in the well-known case of emphatic spreading caused by the presence of pharyngeal or pharyngealised consonants ('emphatics') (e.g. Shosted et al., 2018) .", "cite_spans": [ { "start": 483, "end": 504, "text": "Shosted et al., 2018)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Future Work", "sec_num": "7" }, { "text": "Refer to MSA Vocabulary Selection (Section 8) on our selection criteria of this list.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The authors express their gratitude to participants and other contributors of this study. Furthermore, we would like to thank our three anonymous ALTA reviewers for their careful comments, which helped us to improve this present work.We would also like to thank Ms Leila Kouatly, a MSA lecturer who works at the Australian National University (ANU) for helping us on the selection of the MSA glossary. 
She also provided us a series of opportunities to join her classes and tutorials. We acquired many valuable observations on her pedagogical methods and skills. Her activity in promoting our study ensured that students actively participated in our student experience survey and preliminary evaluation experiments.Moreover, we thank Dr Emmaline Louise Lear and Mr Frederick Chow. Dr Lear helped us to acquire ethic approval for our study and provided us inspirations from an educator's perspective. Mr Chow helped us on communication with ANU Centre for Arab and Islamic Studies which is crucial for our study and commented on engineering details of our project. They also provided insightful suggestions for an early presentation for this study as examiners. We would like to express our sincere appreciation for their help and remarkable work.Finally, we acknowledge the funding and support by Australian Government Research Training Program Scholarships and ANU for the first three authors' higher degree research studies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgement", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Can you 'read' tongue movements? evaluation of the contribution of tongue display to speech understanding", "authors": [ { "first": "Pierre", "middle": [], "last": "Badin", "suffix": "" }, { "first": "Yuliya", "middle": [], "last": "Tarabalka", "suffix": "" }, { "first": "Fr\u00e9d\u00e9ric", "middle": [], "last": "Elisei", "suffix": "" }, { "first": "G\u00e9rard", "middle": [], "last": "Bailly", "suffix": "" } ], "year": 2010, "venue": "Speech Communication", "volume": "52", "issue": "", "pages": "493--503", "other_ids": { "DOI": [ "10.1016/j.specom.2010.03.002" ] }, "num": null, "urls": [], "raw_text": "Pierre Badin, Yuliya Tarabalka, Fr\u00e9d\u00e9ric Elisei, and G\u00e9rard Bailly. 2010. Can you 'read' tongue move- ments? evaluation of the contribution of tongue dis- play to speech understanding. Speech Communica- tion, 52:493-503.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Suppression of acoustic noise in speech using spectral subtraction", "authors": [ { "first": "Steven", "middle": [], "last": "Boll", "suffix": "" } ], "year": 1979, "venue": "IEEE Transactions on Acoustics, Speech, and Signal Processing", "volume": "27", "issue": "2", "pages": "113--120", "other_ids": { "DOI": [ "10.1109/TASSP.1979.1163209" ] }, "num": null, "urls": [], "raw_text": "Steven Boll. 1979. Suppression of acoustic noise in speech using spectral subtraction. IEEE Transac- tions on Acoustics, Speech, and Signal Processing, 27(2):113-120.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Pronunciation teaching practices in canada", "authors": [ { "first": "Judy", "middle": [], "last": "Breitkreutz", "suffix": "" }, { "first": "M", "middle": [], "last": "Tracey", "suffix": "" }, { "first": "Marian", "middle": [ "J" ], "last": "Derwing", "suffix": "" }, { "first": "", "middle": [], "last": "Rossiter", "suffix": "" } ], "year": 2001, "venue": "TESL Canada journal", "volume": "", "issue": "", "pages": "51--61", "other_ids": {}, "num": null, "urls": [], "raw_text": "Judy Breitkreutz, Tracey M Derwing, and Marian J Rossiter. 2001. Pronunciation teaching practices in canada. 
TESL Canada journal, pages 51-61.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Phonology and pronunciation in integrated language teaching and teacher education", "authors": [ { "first": "John", "middle": [], "last": "Burgess", "suffix": "" }, { "first": "Sheila", "middle": [], "last": "Spencer", "suffix": "" } ], "year": 2000, "venue": "System", "volume": "28", "issue": "2", "pages": "191--215", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Burgess and Sheila Spencer. 2000. Phonology and pronunciation in integrated language teaching and teacher education. System, 28(2):191-215.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Supporting students' ability to speak a foreign language intelligibly using educational technologies:The case of learning Arabic in the Australian National University", "authors": [ { "first": "Xinyuan", "middle": [], "last": "Chao", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xinyuan Chao. 2019. Supporting students' ability to speak a foreign language intelligibly using educa- tional technologies:The case of learning Arabic in the Australian National University. College of Engi- neering and Computer Science, The Australian Na- tional University, Canberra, ACT, Australia.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Second language accent and pronunciation teaching: A research-based approach", "authors": [ { "first": "M", "middle": [], "last": "Tracey", "suffix": "" }, { "first": "Murray", "middle": [ "J" ], "last": "Derwing", "suffix": "" }, { "first": "", "middle": [], "last": "Munro", "suffix": "" } ], "year": 2005, "venue": "TESOL Quarterly", "volume": "39", "issue": "3", "pages": "379--397", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tracey M. Derwing and Murray J. Munro. 2005. Sec- ond language accent and pronunciation teaching: A research-based approach. TESOL Quarterly, 39(3):379-397.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Real-time interactive visualization aiding pronunciation of english as a second language", "authors": [ { "first": "Dorina", "middle": [], "last": "Dibra", "suffix": "" }, { "first": "Nuno", "middle": [], "last": "Otero", "suffix": "" }, { "first": "Oskar", "middle": [], "last": "Pettersson", "suffix": "" } ], "year": 2014, "venue": "IEEE 14th International Conference on Advanced Learning Technologies", "volume": "", "issue": "", "pages": "436--440", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dorina Dibra, Nuno Otero, and Oskar Pettersson. 2014. Real-time interactive visualization aiding pronuncia- tion of english as a second language. In 2014 IEEE 14th International Conference on Advanced Learn- ing Technologies, pages 436-440.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Trends and directions in computer-assisted pronunciation training. Investigating English Pronunciation Trends and Directions", "authors": [ { "first": "Jon\u00e1s", "middle": [], "last": "Fouz-Gonz\u00e1lez", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "314--342", "other_ids": { "DOI": [ "10.1057/9781137509437_14" ] }, "num": null, "urls": [], "raw_text": "Jon\u00e1s Fouz-Gonz\u00e1lez. 2015. Trends and directions in computer-assisted pronunciation training. 
Investi- gating English Pronunciation Trends and Directions, pages 314-342.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "The importance of recommender and feedback features in a pronunciation learning aid", "authors": [ { "first": "Dzikri", "middle": [], "last": "Fudholi", "suffix": "" }, { "first": "Hanna", "middle": [], "last": "Suominen", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications", "volume": "", "issue": "", "pages": "83--87", "other_ids": { "DOI": [ "10.18653/v1/W18-3711" ] }, "num": null, "urls": [], "raw_text": "Dzikri Fudholi and Hanna Suominen. 2018. The im- portance of recommender and feedback features in a pronunciation learning aid. In Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications, pages 83- 87, Melbourne, Australia. Association for Computa- tional Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Design science in information systems research", "authors": [ { "first": "Alan", "middle": [ "R" ], "last": "Hevner", "suffix": "" }, { "first": "Salvatore", "middle": [ "T" ], "last": "March", "suffix": "" }, { "first": "Jinsoo", "middle": [], "last": "Park", "suffix": "" }, { "first": "Sudha", "middle": [], "last": "Ram", "suffix": "" } ], "year": 2004, "venue": "MIS Quarterly", "volume": "28", "issue": "1", "pages": "75--105", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alan R. Hevner, Salvatore T. March, Jinsoo Park, and Sudha Ram. 2004. Design science in information systems research. MIS Quarterly, 28(1):75-105.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Real-time visualization of english pronunciation on an ipa chart based on articulatory feature extraction", "authors": [ { "first": "Yurie", "middle": [], "last": "Iribe", "suffix": "" }, { "first": "Takurou", "middle": [], "last": "Mori", "suffix": "" }, { "first": "Kouichi", "middle": [], "last": "Katsurada", "suffix": "" }, { "first": "Goh", "middle": [], "last": "Kawai", "suffix": "" }, { "first": "Tsuneo", "middle": [], "last": "Nitta", "suffix": "" } ], "year": 2012, "venue": "", "volume": "2", "issue": "", "pages": "1270--1273", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yurie Iribe, Takurou Mori, Kouichi Katsurada, Goh Kawai, and Tsuneo Nitta. 2012. Real-time visualiza- tion of english pronunciation on an ipa chart based on articulatory feature extraction. Interspeech 2012, 2:1270-1273.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Designing a visual tool for teaching and learning front-end innovation", "authors": [ { "first": "Priscilla", "middle": [], "last": "Kan John", "suffix": "" }, { "first": "Emmaline", "middle": [], "last": "Lear", "suffix": "" }, { "first": "L'espoir", "middle": [], "last": "Patrick", "suffix": "" }, { "first": "Shirley", "middle": [], "last": "Decosta", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Gregor", "suffix": "" }, { "first": "Ruonan", "middle": [], "last": "Dann", "suffix": "" }, { "first": "", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2020, "venue": "Technology Innovation Management Review", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.22215/timreview/1386" ] }, "num": null, "urls": [], "raw_text": "Priscilla Kan John, Emmaline Lear, Patrick L'Espoir Decosta, Shirley Gregor, Stephen Dann, and Ruonan Sun. 2020. Designing a visual tool for teaching and learning front-end innovation. 
Technology Innova- tion Management Review, 10.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Visual feedback of tongue movement for novel speech sound learning", "authors": [ { "first": "F", "middle": [], "last": "William", "suffix": "" }, { "first": "Sonya", "middle": [], "last": "Katz", "suffix": "" }, { "first": "", "middle": [], "last": "Mehta", "suffix": "" } ], "year": 2015, "venue": "Frontiers in Human Neuroscience", "volume": "9", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.3389/fnhum.2015.00612" ] }, "num": null, "urls": [], "raw_text": "William F. Katz and Sonya Mehta. 2015. Visual feed- back of tongue movement for novel speech sound learning. Frontiers in Human Neuroscience, 9:612.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Relationships between formant frequencies of sustained vowels and tongue contours measured by ultrasonography. American journal of speech-language pathology", "authors": [ { "first": "Jen-Fang", "middle": [], "last": "Shao-Hsuan Lee", "suffix": "" }, { "first": "Yu-Hsiang", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Guo-She", "middle": [], "last": "Hsieh", "suffix": "" }, { "first": "", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2015, "venue": "American Speech-Language-Hearing Association", "volume": "24", "issue": "", "pages": "739--749", "other_ids": { "DOI": [ "10.1044/2015_AJSLP-14-0063" ] }, "num": null, "urls": [], "raw_text": "Shao-Hsuan Lee, Jen-Fang Yu, Yu-Hsiang Hsieh, and Guo-She Lee. 2015. Relationships between formant frequencies of sustained vowels and tongue contours measured by ultrasonography. American journal of speech-language pathology / American Speech- Language-Hearing Association, 24:739-749.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Computer technology in teaching and researching pronunciation", "authors": [ { "first": "John", "middle": [], "last": "Levis", "suffix": "" } ], "year": 2007, "venue": "Annual Review of Applied Linguistics", "volume": "27", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Levis. 2007. Computer technology in teaching and researching pronunciation. Annual Review of Applied Linguistics, 27:184.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Speech Physiology, Speech Perception, and Acoustic Phonetics. Cambridge Studies in Speech Science and Communication", "authors": [ { "first": "Philip", "middle": [], "last": "Lieberman", "suffix": "" }, { "first": "Sheila", "middle": [ "E" ], "last": "Blumstein", "suffix": "" } ], "year": 1988, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1017/CBO9781139165952.004" ] }, "num": null, "urls": [], "raw_text": "Philip Lieberman and Sheila E. Blumstein. 1988. Speech Physiology, Speech Perception, and Acous- tic Phonetics. Cambridge Studies in Speech Science and Communication. Cambridge University Press.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "An analysis of hmm-based prediction of articulatory movements", "authors": [ { "first": "Zhen-Hua", "middle": [], "last": "Ling", "suffix": "" }, { "first": "Korin", "middle": [], "last": "Richmond", "suffix": "" }, { "first": "Junichi", "middle": [], "last": "Yamagishi", "suffix": "" } ], "year": 2010, "venue": "Speech Communication", "volume": "52", "issue": "10", "pages": "834--846", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhen-Hua Ling, Korin Richmond, and Junichi Yamag- ishi. 2010. 
An analysis of hmm-based prediction of articulatory movements. Speech Communication, 52(10):834-846.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "An approach to real-time magnetic resonance imaging for speech production", "authors": [ { "first": "Shrikanth", "middle": [], "last": "Narayanan", "suffix": "" }, { "first": "Krishna", "middle": [], "last": "Nayak", "suffix": "" }, { "first": "Sungbok", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Abhinav", "middle": [], "last": "Sethy", "suffix": "" }, { "first": "Dani", "middle": [], "last": "Byrd", "suffix": "" } ], "year": 2004, "venue": "The Journal of the Acoustical Society of America", "volume": "115", "issue": "4", "pages": "1771--1776", "other_ids": { "DOI": [ "https://asa.scitation.org/doi/abs/10.1121/1.1652588" ] }, "num": null, "urls": [], "raw_text": "Shrikanth Narayanan, Krishna Nayak, Sungbok Lee, Abhinav Sethy, and Dani Byrd. 2004. An approach to real-time magnetic resonance imaging for speech production. The Journal of the Acoustical Society of America, 115(4):1771-1776.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "The pedagogy-technology interface in computer assisted pronunciation training", "authors": [ { "first": "Ambra", "middle": [], "last": "Neri", "suffix": "" }, { "first": "Catia", "middle": [], "last": "Cucchiarini", "suffix": "" }, { "first": "Helmer", "middle": [], "last": "Strik", "suffix": "" }, { "first": "Lou", "middle": [], "last": "Boves", "suffix": "" } ], "year": 2002, "venue": "Computer Assisted Language Learning", "volume": "15", "issue": "5", "pages": "441--467", "other_ids": { "DOI": [ "10.1076/call.15.5.441.13473" ] }, "num": null, "urls": [], "raw_text": "Ambra Neri, Catia Cucchiarini, Helmer Strik, and Lou Boves. 2002. The pedagogy-technology interface in computer assisted pronunciation training. Computer Assisted Language Learning, 15(5):441-467.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Tracking talking faces with shape and appearance models", "authors": [ { "first": "Matthias", "middle": [], "last": "Odisio", "suffix": "" }, { "first": "G\u00e9rard", "middle": [], "last": "Bailly", "suffix": "" }, { "first": "Fr\u00e9d\u00e9ric", "middle": [], "last": "Elisei", "suffix": "" } ], "year": 2004, "venue": "Speech Communication", "volume": "44", "issue": "", "pages": "63--82", "other_ids": { "DOI": [ "10.1016/j.specom.2004.10.008" ] }, "num": null, "urls": [], "raw_text": "Matthias Odisio, G\u00e9rard Bailly, and Fr\u00e9d\u00e9ric Elisei. 2004. Tracking talking faces with shape and appear- ance models. 
Speech Communication, 44:63-82.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "The vowel game: Continuous real-time visualization for pronunciation learning with vowel charts", "authors": [ { "first": "Annu", "middle": [], "last": "Paganus", "suffix": "" }, { "first": "Tomi", "middle": [], "last": "Vesa-Petteri Mikkonen", "suffix": "" }, { "first": "Sami", "middle": [], "last": "M\u00e4ntyl\u00e4", "suffix": "" }, { "first": "Jouni", "middle": [], "last": "Nuuttila", "suffix": "" }, { "first": "Olli", "middle": [], "last": "Isoaho", "suffix": "" }, { "first": "Tapio", "middle": [], "last": "Aaltonen", "suffix": "" }, { "first": "", "middle": [], "last": "Salakoski", "suffix": "" } ], "year": 2006, "venue": "Advances in Natural Language Processing", "volume": "", "issue": "", "pages": "696--703", "other_ids": { "DOI": [ "https://link.springer.com/chapter/10.1007/11816508_69#citeas" ] }, "num": null, "urls": [], "raw_text": "Annu Paganus, Vesa-Petteri Mikkonen, Tomi M\u00e4ntyl\u00e4, Sami Nuuttila, Jouni Isoaho, Olli Aaltonen, and Tapio Salakoski. 2006. The vowel game: Continu- ous real-time visualization for pronunciation learn- ing with vowel charts. In Advances in Natural Lan- guage Processing, pages 696-703, Berlin, Heidel- berg. Springer Berlin Heidelberg.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "A design science research methodology for information systems research", "authors": [ { "first": "Ken", "middle": [], "last": "Peffers", "suffix": "" }, { "first": "Tuure", "middle": [], "last": "Tuunanen", "suffix": "" }, { "first": "A", "middle": [], "last": "Marcus", "suffix": "" }, { "first": "Samir", "middle": [], "last": "Rothenberger", "suffix": "" }, { "first": "", "middle": [], "last": "Chatterjee", "suffix": "" } ], "year": 2007, "venue": "Journal of management information systems", "volume": "24", "issue": "3", "pages": "45--77", "other_ids": { "DOI": [ "https://www.tandfonline.com/doi/abs/10.2753/MIS0742-1222240302" ] }, "num": null, "urls": [], "raw_text": "Ken Peffers, Tuure Tuunanen, Marcus A Rothenberger, and Samir Chatterjee. 2007. A design science research methodology for information systems re- search. Journal of management information systems, 24(3):45-77.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Design-thinking", "authors": [ { "first": "Hasso", "middle": [], "last": "Plattner", "suffix": "" }, { "first": "Christoph", "middle": [], "last": "Meinel", "suffix": "" }, { "first": "Ulrich", "middle": [], "last": "Weinberg", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hasso Plattner, Christoph Meinel, and Ulrich Weinberg. 2009. Design-thinking. Springer.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Theory and Applications of Digital Speech Processing", "authors": [ { "first": "Lawrence", "middle": [], "last": "Rabiner", "suffix": "" }, { "first": "Ronald", "middle": [], "last": "Schafer", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "https://dl.acm.org/doi/book/10.5555/1841670" ] }, "num": null, "urls": [], "raw_text": "Lawrence Rabiner and Ronald Schafer. 2010. Theory and Applications of Digital Speech Processing, 1st edition. 
Prentice Hall Press, Upper Saddle River, NJ, USA.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "A threedimensional articulatory model of the velum and nasopharyngeal wall based on mri and ct data", "authors": [ { "first": "Antoine", "middle": [], "last": "Serrurier", "suffix": "" }, { "first": "Pierre", "middle": [], "last": "Badin", "suffix": "" } ], "year": 2008, "venue": "The Journal of the Acoustical Society of America", "volume": "123", "issue": "", "pages": "2335--55", "other_ids": { "DOI": [ "10.1121/1.2875111" ] }, "num": null, "urls": [], "raw_text": "Antoine Serrurier and Pierre Badin. 2008. A three- dimensional articulatory model of the velum and nasopharyngeal wall based on mri and ct data. The Journal of the Acoustical Society of America, 123:2335-55.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Robust entropy-based endpoint detection for speech recognition in noisy environments", "authors": [ { "first": "Jeih-Weih", "middle": [], "last": "Jia-Lin Shen", "suffix": "" }, { "first": "Lin-Shan", "middle": [], "last": "Hung", "suffix": "" }, { "first": "", "middle": [], "last": "Lee", "suffix": "" } ], "year": 1998, "venue": "Fifth international conference on spoken language processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jia-lin Shen, Jeih-weih Hung, and Lin-shan Lee. 1998. Robust entropy-based endpoint detection for speech recognition in noisy environments. In Fifth interna- tional conference on spoken language processing.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Arabic pharyngeal and emphatic consonants, chapter chapter3", "authors": [ { "first": "Maojing", "middle": [], "last": "Ryan K Shosted", "suffix": "" }, { "first": "Zainab", "middle": [], "last": "Fu", "suffix": "" }, { "first": "", "middle": [], "last": "Hermes", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.4324/9781315147062-4" ] }, "num": null, "urls": [], "raw_text": "Ryan K Shosted, Maojing Fu, and Zainab Hermes. 2018. Arabic pharyngeal and emphatic consonants, chapter chapter3. Routledge.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Formant location from lpc analysis data", "authors": [ { "first": "R", "middle": [ "C" ], "last": "Snell", "suffix": "" }, { "first": "F", "middle": [], "last": "Milinazzo", "suffix": "" } ], "year": 1993, "venue": "IEEE Transactions on Speech and Audio Processing", "volume": "1", "issue": "2", "pages": "129--134", "other_ids": { "DOI": [ "10.1109/89.222882" ] }, "num": null, "urls": [], "raw_text": "R. C. Snell and F. Milinazzo. 1993. Formant loca- tion from lpc analysis data. IEEE Transactions on Speech and Audio Processing, 1(2):129-134.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "A guide to analysing tongue motion from ultrasound images", "authors": [ { "first": "Maureen", "middle": [], "last": "Stone", "suffix": "" } ], "year": 2005, "venue": "Clinical Linguistics & Phonetics", "volume": "19", "issue": "6-7", "pages": "455--501", "other_ids": { "DOI": [ "10.1080/02699200500113558" ], "PMID": [ "16206478" ] }, "num": null, "urls": [], "raw_text": "Maureen Stone. 2005. A guide to analysing tongue mo- tion from ultrasound images. Clinical Linguistics & Phonetics, 19(6-7):455-501. 
PMID: 16206478.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Statistical mapping between articulatory movements and acoustic spectrum using a gaussian mixture model", "authors": [ { "first": "Tomoki", "middle": [], "last": "Toda", "suffix": "" }, { "first": "Alan", "middle": [ "W" ], "last": "Black", "suffix": "" }, { "first": "Keiichi", "middle": [], "last": "Tokuda", "suffix": "" } ], "year": 2008, "venue": "Speech Communication", "volume": "50", "issue": "3", "pages": "215--227", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomoki Toda, Alan W Black, and Keiichi Tokuda. 2008. Statistical mapping between articulatory movements and acoustic spectrum using a gaussian mixture model. Speech Communication, 50(3):215- 227.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Making typically obscured articulatory activity available to speech readers by means of videofluoroscopy", "authors": [ { "first": "Nancy", "middle": [], "last": "Tye-Murray", "suffix": "" }, { "first": "Karen", "middle": [ "Iler" ], "last": "Kirk", "suffix": "" }, { "first": "Lorianne", "middle": [], "last": "Schum", "suffix": "" } ], "year": 1993, "venue": "NCVS Status and Progress Report", "volume": "4", "issue": "", "pages": "41--63", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nancy Tye-Murray, Karen Iler Kirk, and Lorianne Schum. 1993. Making typically obscured articula- tory activity available to speech readers by means of videofluoroscopy. In NCVS Status and Progress Re- port, volume 4, pages 41-63.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Choosing technology tools to meet pronunciation teaching and learning goals", "authors": [ { "first": "Marla", "middle": [], "last": "Tritch", "suffix": "" }, { "first": "Yoshida", "middle": [], "last": "", "suffix": "" } ], "year": 2018, "venue": "The CATESOL Journal", "volume": "30", "issue": "1", "pages": "195--212", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marla Tritch Yoshida. 2018. Choosing technology tools to meet pronunciation teaching and learning goals. The CATESOL Journal, 30(1):195-212.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Synthesizing 3d acoustic-articulatory mapping trajectories: Predicting articulatory movements by longterm recurrent convolutional neural network", "authors": [ { "first": "Lingyun", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Qiang", "middle": [], "last": "Ling", "suffix": "" } ], "year": 2018, "venue": "IEEE Visual Communications and Image Processing (VCIP)", "volume": "", "issue": "", "pages": "1--4", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lingyun Yu, Jun Yu, and Qiang Ling. 2018. Syn- thesizing 3d acoustic-articulatory mapping trajecto- ries: Predicting articulatory movements by long- term recurrent convolutional neural network. 
In 2018 IEEE Visual Communications and Image Pro- cessing (VCIP), pages 1-4.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Bltrcnnbased 3-d articulatory movement prediction: Learning articulatory synchronicity from both text and audio inputs", "authors": [ { "first": "Lingyun", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Qiang", "middle": [], "last": "Ling", "suffix": "" } ], "year": 2019, "venue": "IEEE Transactions on Multimedia", "volume": "21", "issue": "7", "pages": "1621--1632", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lingyun Yu, Jun Yu, and Qiang Ling. 2019. Bltrcnn- based 3-d articulatory movement prediction: Learn- ing articulatory synchronicity from both text and audio inputs. IEEE Transactions on Multimedia, 21(7):1621-1632.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "(a) An example of vowel space plot which shows the location of different vowels in the vowel space (b) Vowel space plot and oral cavity -the Formant-Articulation Correlation Vowel space plot and oral cavity F2 counterpart is associated with tongue placement in the oral cavity (tongue advancement) and plotted along the horizontal axis.", "type_str": "figure", "uris": null }, "FIGREF1": { "num": null, "text": "Vowel detection and segmentation", "type_str": "figure", "uris": null }, "FIGREF2": { "num": null, "text": "The tongue motion for the MSA word \"soap\" prototype: one shows the standard reference, and another reflects their own pronunciation.", "type_str": "figure", "uris": null }, "FIGREF3": { "num": null, "text": "The tongue movement (reference and stu-dent1's practice) for the MSA word \"soap\"(a) Standard reference of \"soap\" (b) Vowel space plot of user input-1 \"soap\" with /\u0101/, /\u016b/ The vowel space plot from standard reference and student1 (a) Standard reference of \"soap\" with arrow (b) Vowel space plot of user input-1 \"soap\" with arrow", "type_str": "figure", "uris": null }, "FIGREF4": { "num": null, "text": "The vowel space plot from standard reference and student1 with arrow", "type_str": "figure", "uris": null }, "FIGREF5": { "num": null, "text": "(a) Standard reference \"soap\" (b) Vowel space plot of user input-2 \"soap\" with /\u0101/, /\u016b/ following wrong trajectory The vowel space plot of standard reference and student2", "type_str": "figure", "uris": null }, "FIGREF6": { "num": null, "text": "(a) Standard reference \"soap\" (b) Vowel space plot of user input-3 \"soap\" with /\u0101/, /\u016b/ Figure 10: The vowel space plots of standard reference and Student3", "type_str": "figure", "uris": null }, "FIGREF7": { "num": null, "text": "(a) Standard reference \"soap\" (b) Vowel space plot of user input-4 \"soap\" The waveform, energy-entropy ratio, and vowel space plot of Student4", "type_str": "figure", "uris": null }, "TABREF1": { "type_str": "table", "num": null, "text": "Ten reference vocabularies", "html": null, "content": "
Vocabulary | MSA | Transliteration | Vowels
shark | قرش | /qirš/ | 1
soap | صابون | /ṣābūn/ | 2
student (male) | طالب | /ṭālib/ | 2
student (female) | طالبة | /ṭāliba/ | 3
" }, "TABREF2": { "type_str": "table", "num": null, "text": "The student test data of four MSA words", "html": null, "content": "" } } } }