{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:31:37.971422Z" }, "title": "Talr\u00f3mur: A large Icelandic TTS corpus", "authors": [ { "first": "Atli", "middle": [ "Thor" ], "last": "Sigurgeirsson", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Da\u00f0i", "middle": [], "last": "\u00deorsteinn", "suffix": "", "affiliation": {}, "email": "" }, { "first": "", "middle": [], "last": "Gunnarsson", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Thor", "middle": [], "last": "Gunnar", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Eyd\u00eds", "middle": [], "last": "\u00d6rn\u00f3lfsson", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Magn\u00fasd\u00f3tir", "middle": [], "last": "Huld", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Kr", "middle": [], "last": "Ragnhei\u00f0ur", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Stef\u00e1n", "middle": [ "Gunnlaugur" ], "last": "\u00de\u00f3rhallsd\u00f3ttir", "suffix": "", "affiliation": {}, "email": "" }, { "first": "J\u00f3n", "middle": [], "last": "J\u00f3nsson", "suffix": "", "affiliation": {}, "email": "" }, { "first": "", "middle": [], "last": "Gu\u00f0nason Menntavegur", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present Talr\u00f3mur 1 , a large high-quality Text-To-Speech (TTS) corpus for the Icelandic language. This multi-speaker corpus contains recordings from 4 male speakers and 4 female speakers of a wide range in age and speaking style. The corpus consists of 122,417 single utterance recordings equating to approximately 213 hours of voice data. All speakers read from the same script which has a high coverage of possible Icelandic diphones. Manual analysis of 15,956 utterances indicates that the corpus has a reading mistake rate no higher than 0.25%. We additionally present results from subjective evaluations of the different voices with regards to intelligibility, likeability and trustworthiness.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "We present Talr\u00f3mur 1 , a large high-quality Text-To-Speech (TTS) corpus for the Icelandic language. This multi-speaker corpus contains recordings from 4 male speakers and 4 female speakers of a wide range in age and speaking style. The corpus consists of 122,417 single utterance recordings equating to approximately 213 hours of voice data. All speakers read from the same script which has a high coverage of possible Icelandic diphones. Manual analysis of 15,956 utterances indicates that the corpus has a reading mistake rate no higher than 0.25%. We additionally present results from subjective evaluations of the different voices with regards to intelligibility, likeability and trustworthiness.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "All statistical TTS models require some training data to learn the mapping from text to speech. Unit selection TTS models are capable of producing an intelligible voice using less than 2 hours of aligned speech (Conkie, 1999) . HMM-based TTS models can produce somewhat natural-sounding speech using less than 500 utterances (Yoshimura et al., 1999) . The more recent neural end-toend models have reached a considerably higher mean opinion score (MOS) in regard to naturalness. 
However, they require a much larger training corpus; most require tens of thousands of utterances to converge and reach natural-sounding synthesis (Wang et al., 2017; Ren et al., 2019). The widely used LJ Speech corpus consists of 13,100 recordings amounting to approximately 24 hours (Ito and Johnson, 2017). 1 \"Tal\" means speech and \"r\u00f3mur\" means voice.", "cite_spans": [ { "start": 211, "end": 225, "text": "(Conkie, 1999)", "ref_id": "BIBREF1" }, { "start": 325, "end": 349, "text": "(Yoshimura et al., 1999)", "ref_id": "BIBREF19" }, { "start": 625, "end": 644, "text": "(Wang et al., 2017)", "ref_id": "BIBREF16" }, { "start": 645, "end": 663, "text": "(Ren et al., 2019)", "ref_id": "BIBREF10" }, { "start": 765, "end": 788, "text": "(Ito and Johnson, 2017)", "ref_id": "BIBREF4" }, { "start": 791, "end": 792, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To produce high-quality synthesised speech with minimal noise, the corpora used for training TTS models are most often captured in a studio under supervision. New approaches have lowered the language-specific expertise needed for high-quality TTS, but at the cost of requiring larger amounts of training data (Sotelo et al., 2017; Arik et al., 2017; Wang et al., 2017; Ren et al., 2019). The large amount of data needed and the quality of that data limit the ability of many low-resource language communities to benefit from these recent advancements in the TTS domain.", "cite_spans": [ { "start": 308, "end": 328, "text": "(Sotelo et al., 2017", "ref_id": "BIBREF13" }, { "start": 329, "end": 349, "text": ") (Arik et al., 2017", "ref_id": "BIBREF0" }, { "start": 350, "end": 369, "text": "(Wang et al., 2017)", "ref_id": "BIBREF16" }, { "start": 370, "end": 388, "text": "(Ren et al., 2019)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The Icelandic language program (ILP) is a 5-year, government-funded program to make the Icelandic language viable in the digital world (Nikul\u00e1sd\u00f3ttir et al., 2020). TTS development for Icelandic is a significant part of the ILP, ranging from unit selection voices to multi-speaker TTS models. A prerequisite for all TTS projects of the ILP is a large, high-quality TTS corpus, which up to this point has not been available for open use (Nikul\u00e1sd\u00f3ttir et al., 2020).", "cite_spans": [ { "start": 134, "end": 162, "text": "(Nikul\u00e1sd\u00f3ttir et al., 2020)", "ref_id": null }, { "start": 433, "end": 461, "text": "(Nikul\u00e1sd\u00f3ttir et al., 2020)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Previous work in spoken language technology for Icelandic has focused more on speech recognition, both in terms of data acquisition and acoustic modelling (Helgad\u00f3ttir et al., 2017) (Gu\u00f0nason et al., 2012) (Steingr\u00edmsson et al., 2017) (Mollberg et al., 2020). Since most of that data is found data or crowd-sourced recordings from many speakers, it is not ideal for speech synthesis, where low background noise and high recording quality are important. An Icelandic pronunciation dictionary for TTS exists, as well as a limited text normalisation system (Nikul\u00e1sd\u00f3ttir et al., 2018) (Nikul\u00e1sd\u00f3ttir and Gu\u00f0nason, 2019). To address the lack of high-quality Icelandic TTS data, Talr\u00f3mur has been created. 
Table 1: Overview of the corpus, outlining key statistics and information for each speaker. The \"Name\" column contains pseudonyms for the speakers in the corpus.
Voice samples from speaker applicants were analysed and evaluated with prosody and a subjective evaluation of pleasantness in mind. Each participating speaker was given a recording schedule, typically two hours each working day until completion. Dialect diversity is low in Iceland, and six main but rather similar regional variants are listed in the Icelandic pronunciation dictionary (Nikul\u00e1sd\u00f3ttir et al., 2018). Speakers A-F all speak the most frequent standard dialect, while speakers G and H speak the second most frequent regional variant. Speakers A, B and C differ somewhat from the rest of the group, and their qualities deserve specific mention.", "cite_spans": [ { "start": 187, "end": 210, "text": "(Gu\u00f0nason et al., 2012)", "ref_id": "BIBREF2" }, { "start": 211, "end": 239, "text": "(Steingr\u00edmsson et al., 2017)", "ref_id": "BIBREF14" }, { "start": 240, "end": 263, "text": "(Mollberg et al., 2020)", "ref_id": "BIBREF6" }, { "start": 597, "end": 612, "text": "Gu\u00f0nason, 2019)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 698, "end": 705, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Speaker A was the first speaker we recorded. At that time, development of the recording client was still ongoing and we had limited experience with the studio and equipment. As shown in table 1, this speaker has significantly fewer hours recorded.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Speaker B is a 70-year-old man with limited eyesight. This speaker often had issues with reading the prompts fluently, which results in unnatural pauses in the middle of sentences that correspond to where the line is split on the screen. We have looked into using silence detection to remove these pauses, and current results suggest that this task is easily automated. However, we release the data in raw form, without any trimming.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Speaker C is a female voice actor with a deep, breathy voice. This speaker's recordings are more similar to audio-book recordings in that they have a more animated speaking style compared to the other speakers.
2. Technical Details
Each speaker reads single-sentence prompts from the same reading list. The reading list was designed to have a high coverage of diphones in the Icelandic language (Sigurgeirsson et al., 2020). The prompts were sourced from Risam\u00e1lheild, a large Icelandic text corpus consisting of text from many different types of sources (Steingr\u00edmsson et al., 2018).
Figure 1: Mel-frequency spectrograms of all speakers saying the same phrase: \"\u00c9g, \u00e9g er sko, \u00e9g er ekki sko, alveg viss um \u00feetta\".", "cite_spans": [ { "start": 531, "end": 559, "text": "(Sigurgeirsson et al., 2020)", "ref_id": "BIBREF11" }, { "start": 692, "end": 720, "text": "(Steingr\u00edmsson et al., 2018)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 330, "end": 338, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" },
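{ "text": "Coverage of this kind can be checked directly against a phonetised version of the prompts. The following is a minimal, illustrative sketch of such a computation; it is not the script-design tool of Sigurgeirsson et al. (2020), and the function and variable names are hypothetical.
```python
def diphone_coverage(phonetised_prompts, inventory):
    # phonetised_prompts: one list of phone symbols per prompt
    # inventory: set of (phone, phone) pairs treated as valid Icelandic diphones
    seen = set()
    for phones in phonetised_prompts:
        seen.update(zip(phones, phones[1:]))  # all adjacent phone pairs
    return len(seen & inventory) / len(inventory)

# e.g. diphone_coverage([['t', 'a', 'l'], ['r', 'ou', 'm', 'Y', 'r']], inventory)
```
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" },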
{ "text": "Recording sessions were carried out in a studio at the national broadcaster of Iceland. After recording the first two speakers, the project was moved to a different studio at the national broadcaster due to restrictions caused by the COVID-19 pandemic. The last two speakers reside in northern Iceland and were therefore recorded in a third studio. The recordings were captured between November 2019 and September 2020.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Since the speakers read prompts from the same reading list, nearly all sentences in the corpus are spoken by multiple speakers. This makes the corpus ideal for multi-speaker TTS development, prosody transfer, voice conversion and other research domains where the speaker identity and linguistic content have to be disentangled by the TTS model (Skerry-Ryan et al., 2018).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The same recording hardware was used for all recordings. The hardware consisted of an AKG ULS series condenser microphone equipped with a CK-61 cardioid capsule, an SPL Channel One pre-amplifier and a Clarett 2Pre sound card. The recordings were captured using a recording client specifically made for this project (Sigurgeirsson et al., 2020).", "cite_spans": [ { "start": 314, "end": 342, "text": "(Sigurgeirsson et al., 2020)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We store some information about every recording captured, such as how the text appeared on the monitor to the speaker, the session ID and technical information about the recordings. Most recordings are sampled at 48 kHz with 16-bit depth; some recordings of speakers A and B are sampled at 44.1 kHz. All recordings are single channel. We have analysed a portion of the recordings for quality: of the 122,417 recordings, 15,956 have been analysed. Using a proprietary tool, human evaluators are asked to first listen to a single recording and then indicate whether the recording matches the prompt and whether the recording quality is good. We specifically ask the evaluators to indicate whether the volume is either too high, resulting in pops or distortions, or too low, making the recording hard to comprehend, and whether any other audio flaws are audible in the recording.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Of the recordings analysed, 613 (about 3.8%) were marked as bad. Only 1.23% of the recordings were indicated to have a mismatch between the prompt and the recording. Upon further inspection, it seems that the evaluators marked recordings with untimely silences as prompt mismatches. Most of those are spoken by speaker B, as explained in section 2. After a second pass over the evaluations, we are confident that a better estimate of the prompt mismatch rate is no more than 0.25%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recording Analysis", "sec_num": "3" }, { "text": "The rate of audio flaws is 2.17%, but reviewing the samples in question revealed that a significant portion of these recordings do not have any unwanted artefacts. Upon inspection, we believe some of these recordings have a higher-than-normal volume, making them sound unpleasant compared to other recordings. This is particularly common for speaker B. The volume of a recording can be too high if the speaker moved too close to the microphone, if the hardware was not configured correctly, or if the speaker spoke with more effort than is natural for them. There are, however, some recordings that do have unwanted artefacts. In most cases, this is a small pop at a random location in the recording. These pops mostly appear in recordings from speakers A and B, and we therefore deduce that the source of this artefact is the hardware configuration in the recording studio used for those speakers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recording Analysis", "sec_num": "3" },
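{ "text": "A check of this kind is straightforward to approximate automatically. The following is a minimal, illustrative sketch of how too-loud or clipped recordings could be flagged; it is not the proprietary evaluation tool described above, and it assumes the numpy and soundfile Python libraries with purely illustrative thresholds.
```python
import numpy as np
import soundfile as sf

def flag_loud_or_clipped(path, clip_thresh=0.99, rms_dbfs_thresh=-12.0):
    # Returns True if the peak level suggests clipping or the overall
    # RMS level is unusually hot. Both thresholds are illustrative.
    audio, _ = sf.read(path)  # floats in [-1, 1] for PCM input
    if audio.ndim > 1:        # the corpus is single channel, but be safe
        audio = audio.mean(axis=1)
    peak = float(np.max(np.abs(audio)))
    rms_dbfs = 20 * np.log10(max(float(np.sqrt(np.mean(audio ** 2))), 1e-10))
    return peak >= clip_thresh or rms_dbfs > rms_dbfs_thresh
```
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recording Analysis", "sec_num": "3" },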
{ "text": "To gain further information about which voice would be most suitable for general TTS use, we set up a subjective listening experiment with 50 participants. During the listening experiment, the participants listen to a single recording at a time. They are then asked one of three questions 2: Q1: How easy is it to understand this voice? Q2: How pleasant is this voice? Q3: How trustworthy is this voice?
Table 3: Estimation of speaking rate (SR) and average F0. Pitch was estimated by averaging pitch over voiced segments in the phrase used in figure 1. ProsodyPro was used for pitch tracking (Xu, 2016).", "cite_spans": [ { "start": 595, "end": 605, "text": "(Xu, 2016)", "ref_id": "BIBREF18" } ], "ref_spans": [ { "start": 405, "end": 412, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Subjective Listening Experiment", "sec_num": "4" }, { "text": "The participants then rate the recording on a scale from 1 to 5, e.g. from very untrustworthy to very trustworthy. Before starting the evaluation, participants are made aware that the sentences being spoken should not affect their judgement and that they should focus on the voice itself. Each participant listens to 3 recordings from each speaker for each of the three questions, resulting in 24 evaluations per question and 72 evaluations in total per participant. We used a balanced Latin square experimental design, with 24 different recordings tested for each evaluation question (MacKenzie, 2002). This resulted in 1074 Q1 responses, 1074 Q2 responses and 1088 Q3 responses. The number of responses per utterance ranges from 4 to 8.", "cite_spans": [ { "start": 583, "end": 600, "text": "(MacKenzie, 2002)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Subjective Listening Experiment", "sec_num": "4" }, { "text": "Results from this experiment are shown in table 4. These scores are relative between the 8 speakers, since listeners only listen to recordings from the Talr\u00f3mur corpus. Because the listening test was not anchored, the interpretation of the rating scale varied noticeably between listeners. The results we present here are therefore normalised per listener; the raw scores are higher, particularly for Q1. Voice G is rated as the most intelligible, and voice H as the most likeable and most trustworthy, although neither scored significantly higher than the second-highest voice for its question.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Subjective Listening Experiment", "sec_num": "4" },
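{ "text": "One simple way to realise such per-listener normalisation is to z-score each listener's ratings and map them back onto the 1-5 scale. The sketch below illustrates that idea; it is an assumption for illustration, not necessarily the exact procedure behind table 4.
```python
import numpy as np
from collections import defaultdict

def normalise_per_listener(responses):
    # responses: list of (listener_id, speaker_id, rating) tuples
    by_listener = defaultdict(list)
    for listener, _, rating in responses:
        by_listener[listener].append(rating)
    stats = {}
    for listener, ratings in by_listener.items():
        sd = float(np.std(ratings))
        stats[listener] = (float(np.mean(ratings)), sd if sd > 0 else 1.0)
    per_speaker = defaultdict(list)
    for listener, speaker, rating in responses:
        mu, sd = stats[listener]
        # centre every listener at 3 and clip back onto the 1-5 scale
        per_speaker[speaker].append(float(np.clip(3.0 + (rating - mu) / sd, 1.0, 5.0)))
    return {s: (float(np.mean(v)), float(np.std(v))) for s, v in per_speaker.items()}
```
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Subjective Listening Experiment", "sec_num": "4" },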
{ "text": "In this paper we introduce the Talr\u00f3mur corpus, which is the result of the first TTS data acquisition phase of the Icelandic language program. Talr\u00f3mur is a large, high-quality speech corpus designed specifically for TTS. The corpus consists of 8 different voices with a wide range in prosody, speaking style and age.
Table 4: Normalised mean opinion score with standard deviation for each speaker and each question. Q1 tested for intelligibility, Q2 for likeability and Q3 for trustworthiness.
The quality and amount of data in Talr\u00f3mur matches or exceeds that used in many state-of-the-art end-to-end neural TTS models for the English language. A subjective evaluation indicates which voice users are likely to prefer, but we believe most of the voices are good candidates for general TTS use. As with other deliverables belonging to the ILP, the data will be published under open licenses to encourage wide use and adoption. The data has been made available through the CLARIN project 3.", "cite_spans": [], "ref_spans": [ { "start": 325, "end": 332, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Summary and Future Work", "sec_num": "5" }, { "text": "This project was funded by the Language Technology Programme for Icelandic 2019-2023. The programme, which is managed and coordinated by Almannar\u00f3mur 4, is funded by the Icelandic Ministry of Education, Science and Culture.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": "6" }, { "text": "In Icelandic: Q1: Hversu au\u00f0skiljanleg \u00feykir \u00fe\u00e9r \u00feessi r\u00f6dd? Q2: Hversu vi\u00f0kunnanleg \u00feykir \u00fe\u00e9r \u00feessi r\u00f6dd? Q3: Hversu traustver\u00f0ug \u00feykir \u00fe\u00e9r \u00feessi r\u00f6dd?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "3 https://repository.clarin.is/repository/xmlui/handle/20.500.12537/104 4 https://almannaromur.is/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Deep voice: Real-time neural text-to-speech", "authors": [ { "first": "Mike", "middle": [], "last": "Sercan O Arik", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Chrzanowski", "suffix": "" }, { "first": "Gregory", "middle": [], "last": "Coates", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Diamos", "suffix": "" }, { "first": "Yongguo", "middle": [], "last": "Gibiansky", "suffix": "" }, { "first": "Xian", "middle": [], "last": "Kang", "suffix": "" }, { "first": "John", "middle": [], "last": "Li", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Miller", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Ng", "suffix": "" }, { "first": "", "middle": [], "last": "Raiman", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sercan O Arik, Mike Chrzanowski, Adam Coates, Gregory Diamos, Andrew Gibiansky, Yongguo Kang, Xian Li, John Miller, Andrew Ng, Jonathan Raiman, et al. 2017. Deep voice: Real-time neural text-to-speech.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Robust unit selection system for speech synthesis", "authors": [ { "first": "Alistair", "middle": [], "last": "Conkie", "suffix": "" } ], "year": 1999, "venue": "137th meeting of the", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alistair Conkie. 1999. Robust unit selection system for speech synthesis. 
In 137th meeting of the Acoustical Society of America, page 978.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "J\u00f6kull J\u00f3hannsson, El\u00edn Carstensd\u00f3ttir, Hannes H\u00f6gni Vilhj\u00e1lmsson", "authors": [ { "first": "J\u00f3n", "middle": [], "last": "Gu\u00f0nason", "suffix": "" }, { "first": "Oddur", "middle": [], "last": "Kjartansson", "suffix": "" } ], "year": 2012, "venue": "Hrafn Loftsson, Sigr\u00fan Helgad\u00f3ttir, Krist\u00edn M J\u00f3hannsd\u00f3ttir, and Eir\u00edkur R\u00f6gnvaldsson", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J\u00f3n Gu\u00f0nason, Oddur Kjartansson, J\u00f6kull J\u00f3hannsson, El\u00edn Carstensd\u00f3ttir, Hannes H\u00f6gni Vilhj\u00e1lmsson, Hrafn Loftsson, Sigr\u00fan Helgad\u00f3ttir, Krist\u00edn M J\u00f3hannsd\u00f3ttir, and Eir\u00edkur R\u00f6gnvaldsson. 2012. Almannaromur: An open Icelandic speech corpus. In Spoken Language Technologies for Under-Resourced Languages.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Building an ASR corpus using Althingi's parliamentary speeches", "authors": [ { "first": "R\u00f3bert", "middle": [], "last": "Inga R\u00fan Helgad\u00f3ttir", "suffix": "" }, { "first": "", "middle": [], "last": "Kjaran", "suffix": "" } ], "year": 2017, "venue": "INTERSPEECH", "volume": "", "issue": "", "pages": "2163--2167", "other_ids": {}, "num": null, "urls": [], "raw_text": "Inga R\u00fan Helgad\u00f3ttir, R\u00f3bert Kjaran, Anna Bj\u00f6rk Nikul\u00e1sd\u00f3ttir, and J\u00f3n Gu\u00f0nason. 2017. Building an ASR corpus using Althingi's parliamentary speeches. In INTERSPEECH, pages 2163-2167.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "The LJ Speech dataset", "authors": [ { "first": "Keith", "middle": [], "last": "Ito", "suffix": "" }, { "first": "Linda", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Keith Ito and Linda Johnson. 2017. The LJ Speech dataset. https://keithito.com/LJ-Speech-Dataset/.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Within-subjects vs. between-subjects designs: Which to use? Human-Computer Interaction: An Empirical Research Perspective", "authors": [ { "first": "Mackenzie", "middle": [], "last": "Scott", "suffix": "" } ], "year": 2002, "venue": "", "volume": "7", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "I Scott MacKenzie. 2002. Within-subjects vs. between-subjects designs: Which to use? 
Human-Computer Interaction: An Empirical Research Perspective, 7:2005.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Samr\u00f3mur: Crowd-sourcing data collection for Icelandic speech recognition", "authors": [ { "first": "David", "middle": [ "Erik" ], "last": "Mollberg", "suffix": "" }, { "first": "\u00d3lafur", "middle": [], "last": "Helgi J\u00f3nsson", "suffix": "" }, { "first": "Sunneva", "middle": [], "last": "\u00deorsteinsd\u00f3ttir", "suffix": "" }, { "first": "Stein\u00fe\u00f3r", "middle": [], "last": "Steingr\u00edmsson", "suffix": "" }, { "first": "Eyd\u00eds", "middle": [], "last": "Huld Magn\u00fasd\u00f3ttir", "suffix": "" }, { "first": "Jon", "middle": [], "last": "Gudnason", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 12th Conference on Language Resources and Evaluation", "volume": "", "issue": "", "pages": "3463--3467", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Erik Mollberg, \u00d3lafur Helgi J\u00f3nsson, Sunneva \u00deorsteinsd\u00f3ttir, Stein\u00fe\u00f3r Steingr\u00edmsson, Eyd\u00eds Huld Magn\u00fasd\u00f3ttir, and Jon Gudnason. 2020. Samr\u00f3mur: Crowd-sourcing data collection for Icelandic speech recognition. In Proceedings of the 12th Conference on Language Resources and Evaluation, pages 3463-3467.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Bootstrapping a Text Normalization System for an Inflected Language. Numbers as a Test Case", "authors": [ { "first": "Anna", "middle": [], "last": "Bj\u00f6rk Nikul\u00e1sd\u00f3ttir", "suffix": "" }, { "first": "J\u00f3n", "middle": [], "last": "Gu\u00f0nason", "suffix": "" } ], "year": 2019, "venue": "INTERSPEECH", "volume": "", "issue": "", "pages": "4455--4459", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anna Bj\u00f6rk Nikul\u00e1sd\u00f3ttir and J\u00f3n Gu\u00f0nason. 2019. Bootstrapping a Text Normalization System for an Inflected Language. Numbers as a Test Case. In INTERSPEECH, pages 4455-4459.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Anton Karl Ingason, Hrafn Loftsson, Eir\u00edkur R\u00f6gnvaldsson, Einar Freyr Sigur\u00f0sson, and Stein\u00fe\u00f3r Steingr\u00edmsson. 2020. Language technology programme for Icelandic", "authors": [ { "first": "Anna", "middle": [], "last": "Bj\u00f6rk Nikul\u00e1sd\u00f3ttir", "suffix": "" }, { "first": "J\u00f3n", "middle": [], "last": "Gu\u00f0nason", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "3414--3422", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anna Bj\u00f6rk Nikul\u00e1sd\u00f3ttir, J\u00f3n Gu\u00f0nason, Anton Karl Ingason, Hrafn Loftsson, Eir\u00edkur R\u00f6gnvaldsson, Einar Freyr Sigur\u00f0sson, and Stein\u00fe\u00f3r Steingr\u00edmsson. 2020. Language technology programme for Icelandic 2019-2023. pages 3414-3422.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "An Icelandic pronunciation dictionary for TTS", "authors": [ { "first": "Anna", "middle": [], "last": "Bj\u00f6rk Nikul\u00e1sd\u00f3ttir", "suffix": "" }, { "first": "J\u00f3n", "middle": [], "last": "Gu\u00f0nason", "suffix": "" }, { "first": "Eir\u00edkur", "middle": [], "last": "R\u00f6gnvaldsson", "suffix": "" } ], "year": 2018, "venue": "2018 IEEE Spoken Language Technology Workshop (SLT)", "volume": "", "issue": "", "pages": "339--345", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anna Bj\u00f6rk Nikul\u00e1sd\u00f3ttir, J\u00f3n Gu\u00f0nason, and Eir\u00edkur R\u00f6gnvaldsson. 2018. 
An Icelandic pronunciation dictionary for TTS. In 2018 IEEE Spoken Language Technology Workshop (SLT), pages 339-345. IEEE.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Fastspeech: Fast, robust and controllable text to speech", "authors": [ { "first": "Yi", "middle": [], "last": "Ren", "suffix": "" }, { "first": "Yangjun", "middle": [], "last": "Ruan", "suffix": "" }, { "first": "Xu", "middle": [], "last": "Tan", "suffix": "" }, { "first": "Tao", "middle": [], "last": "Qin", "suffix": "" }, { "first": "Sheng", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Zhou", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Tie-Yan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "3171--3180", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yi Ren, Yangjun Ruan, Xu Tan, Tao Qin, Sheng Zhao, Zhou Zhao, and Tie-Yan Liu. 2019. Fastspeech: Fast, robust and controllable text to speech. In Advances in Neural Information Processing Systems, pages 3171-3180.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Manual speech synthesis data acquisition - from script design to recording speech", "authors": [ { "first": "Atli", "middle": [], "last": "Sigurgeirsson", "suffix": "" }, { "first": "Gunnar", "middle": [], "last": "\u00d6rn\u00f3lfsson", "suffix": "" }, { "first": "J\u00f3n", "middle": [], "last": "Gu\u00f0nason", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 1st Joint Workshop on Spoken Language Technologies for Under-resourced languages (SLTU) and Collaboration and Computing for Under-Resourced Languages (CCURL)", "volume": "", "issue": "", "pages": "316--320", "other_ids": {}, "num": null, "urls": [], "raw_text": "Atli Sigurgeirsson, Gunnar \u00d6rn\u00f3lfsson, and J\u00f3n Gu\u00f0nason. 2020. Manual speech synthesis data acquisition - from script design to recording speech. In Proceedings of the 1st Joint Workshop on Spoken Language Technologies for Under-resourced languages (SLTU) and Collaboration and Computing for Under-Resourced Languages (CCURL), pages 316-320.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Towards end-to-end prosody transfer for expressive speech synthesis with Tacotron", "authors": [ { "first": "R", "middle": [ "J" ], "last": "Skerry-Ryan", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Battenberg", "suffix": "" }, { "first": "Ying", "middle": [], "last": "Xiao", "suffix": "" }, { "first": "Yuxuan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Daisy", "middle": [], "last": "Stanton", "suffix": "" }, { "first": "Joel", "middle": [], "last": "Shor", "suffix": "" }, { "first": "Ron", "middle": [], "last": "Weiss", "suffix": "" }, { "first": "Rob", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Rif A", "middle": [], "last": "Saurous", "suffix": "" } ], "year": 2018, "venue": "International Conference on Machine Learning", "volume": "", "issue": "", "pages": "4693--4702", "other_ids": {}, "num": null, "urls": [], "raw_text": "RJ Skerry-Ryan, Eric Battenberg, Ying Xiao, Yuxuan Wang, Daisy Stanton, Joel Shor, Ron Weiss, Rob Clark, and Rif A Saurous. 2018. Towards end-to-end prosody transfer for expressive speech synthesis with Tacotron. In International Conference on Machine Learning, pages 4693-4702. 
PMLR.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Char2wav: End-to-end speech synthesis", "authors": [ { "first": "Jose", "middle": [], "last": "Sotelo", "suffix": "" }, { "first": "Soroush", "middle": [], "last": "Mehri", "suffix": "" }, { "first": "Kundan", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Joao", "middle": [ "Felipe" ], "last": "Santos", "suffix": "" }, { "first": "Kyle", "middle": [], "last": "Kastner", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Courville", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2017, "venue": "In ICLR2017 workshop submission", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jose Sotelo, Soroush Mehri, Kundan Kumar, Joao Felipe Santos, Kyle Kastner, Aaron Courville, and Yoshua Bengio. 2017. Char2wav: End-to-end speech synthesis. In ICLR2017 workshop submission.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "M\u00e1lr\u00f3mur: A manually verified corpus of recorded Icelandic speech", "authors": [ { "first": "Stein\u00fe\u00f3r", "middle": [], "last": "Steingr\u00edmsson", "suffix": "" }, { "first": "J\u00f3n", "middle": [], "last": "Gu\u00f0nason", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 21st Nordic Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "237--240", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stein\u00fe\u00f3r Steingr\u00edmsson, J\u00f3n Gu\u00f0nason, Sigr\u00fan Helgad\u00f3ttir, and Eir\u00edkur R\u00f6gnvaldsson. 2017. M\u00e1lr\u00f3mur: A manually verified corpus of recorded Icelandic speech. In Proceedings of the 21st Nordic Conference on Computational Linguistics, pages 237-240.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Eir\u00edkur R\u00f6gnvaldsson, Starka\u00f0ur Barkarson, and J\u00f3n Gu\u00f0nason", "authors": [ { "first": "Stein\u00fe\u00f3r", "middle": [], "last": "Steingr\u00edmsson", "suffix": "" }, { "first": "Sigr\u00fan", "middle": [], "last": "Helgad\u00f3ttir", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 11th International Conference on Language Resources and Evaluation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stein\u00fe\u00f3r Steingr\u00edmsson, Sigr\u00fan Helgad\u00f3ttir, Eir\u00edkur R\u00f6gnvaldsson, Starka\u00f0ur Barkarson, and J\u00f3n Gu\u00f0nason. 2018. Risam\u00e1lheild: A very large Icelandic text corpus. 
In Proceedings of the 11th International Conference on Language Resources and Evaluation (LREC 2018).", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Tacotron: Towards end-to-end speech synthesis", "authors": [ { "first": "Yuxuan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Daisy", "middle": [], "last": "Skerry-Ryan", "suffix": "" }, { "first": "Yonghui", "middle": [], "last": "Stanton", "suffix": "" }, { "first": "Ron", "middle": [ "J" ], "last": "Wu", "suffix": "" }, { "first": "Navdeep", "middle": [], "last": "Weiss", "suffix": "" }, { "first": "Zongheng", "middle": [], "last": "Jaitly", "suffix": "" }, { "first": "Ying", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Zhifeng", "middle": [], "last": "Xiao", "suffix": "" }, { "first": "Samy", "middle": [], "last": "Chen", "suffix": "" }, { "first": "", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "4006--4010", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuxuan Wang, RJ Skerry-Ryan, Daisy Stanton, Yonghui Wu, Ron J Weiss, Navdeep Jaitly, Zongheng Yang, Ying Xiao, Zhifeng Chen, Samy Bengio, et al. 2017. Tacotron: Towards end-to-end speech synthesis. pages 4006-4010.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Style tokens: Unsupervised style modeling, control and transfer in end-to-end speech synthesis", "authors": [ { "first": "Yuxuan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Daisy", "middle": [], "last": "Stanton", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Skerry-Ryan", "suffix": "" }, { "first": "Joel", "middle": [], "last": "Battenberg", "suffix": "" }, { "first": "Ying", "middle": [], "last": "Shor", "suffix": "" }, { "first": "Ye", "middle": [], "last": "Xiao", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Jia", "suffix": "" }, { "first": "Rif A", "middle": [], "last": "Ren", "suffix": "" }, { "first": "", "middle": [], "last": "Saurous", "suffix": "" } ], "year": 2018, "venue": "International Conference on Machine Learning", "volume": "", "issue": "", "pages": "5180--5189", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuxuan Wang, Daisy Stanton, Yu Zhang, RJ Skerry-Ryan, Eric Battenberg, Joel Shor, Ying Xiao, Ye Jia, Fei Ren, and Rif A Saurous. 2018. Style tokens: Unsupervised style modeling, control and transfer in end-to-end speech synthesis. In International Conference on Machine Learning, pages 5180-5189. PMLR.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "ProsodyPro - a Praat script for large-scale systematic analysis of continuous prosodic events", "authors": [ { "first": "Y", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2013, "venue": "Proceedings of Tools and Resources for the Analysis of Speech Prosody", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y Xu. 2016. ProsodyPro - a Praat script for large-scale systematic analysis of continuous prosodic events. 
In Proceedings of Tools and Resources for the Analysis of Speech Prosody (TRASP 2013).", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Simultaneous modeling of spectrum, pitch and duration in HMM-based speech synthesis", "authors": [ { "first": "Takayoshi", "middle": [], "last": "Yoshimura", "suffix": "" }, { "first": "Keiichi", "middle": [], "last": "Tokuda", "suffix": "" }, { "first": "Takashi", "middle": [], "last": "Masuko", "suffix": "" }, { "first": "Takao", "middle": [], "last": "Kobayashi", "suffix": "" }, { "first": "Tadashi", "middle": [], "last": "Kitamura", "suffix": "" } ], "year": 1999, "venue": "Sixth European Conference on Speech Communication and Technology", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Takayoshi Yoshimura, Keiichi Tokuda, Takashi Masuko, Takao Kobayashi, and Tadashi Kitamura. 1999. Simultaneous modeling of spectrum, pitch and duration in HMM-based speech synthesis. In Sixth European Conference on Speech Communication and Technology.", "links": null } }, "ref_entries": { "TABREF2": { "text": "The results of 15,956 recording analyses.", "type_str": "table", "html": null, "content": "
The evaluators judge long silences as prompt mismatches, resulting in 196 prompt mismatch evaluations. Subtracting those results in a much lower number, or 39.
", "num": null }, "TABREF3": { "text": "147.77 198.82 \u00b1 22.56 246.68 1.30 6.01 \u00b1 1.55 14.38 B 1.73 10.19 76.58 150.71 \u00b1 24.57 215.14 2.22 7.68 \u00b1 1.91 18.68 C 1.89 11.20 107.62 173.61 \u00b1 24.52 331.10 2.71 7.48 \u00b1 1.77 17.76 D 2.24 13.28 79.69 143.69 \u00b1 28.02 210.22 0.91 6.57 \u00b1 1.53 15.", "type_str": "table", "html": null, "content": "
IDSR words / sec chars / secMinF0 Mean \u00b1 SDDuration Max Min Mean \u00b1 SD Max
A2.3413.70 97
E2.9417.39 128.37 210.74 \u00b1 27.00 294.88 1.86 5.65 \u00b1 1.50 14.46
F3.2619.33 102.03 128.10 \u00b1 12.89 165.02 1.78 5.28 \u00b1 1.36 12.96
G2.3914.13 154.71 237.08 \u00b1 20.60 271.69 2.26 6.43 \u00b1 1.64 14.82
H2.6015.4298.45 142.69 \u00b1 23.36 213.84 1.44 6.09 \u00b1 1.56 14.57
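Statistics of this kind can be recomputed per utterance from a recording and its prompt. The sketch below is illustrative only and assumes the parselmouth Praat bindings; the published numbers come from ProsodyPro (Xu, 2016), so exact values will differ.
```python
import parselmouth  # Praat bindings, an illustrative stand-in for ProsodyPro

def utterance_stats(wav_path, prompt):
    snd = parselmouth.Sound(wav_path)
    f0 = snd.to_pitch().selected_array['frequency']
    voiced = f0[f0 > 0]  # Praat marks unvoiced frames with 0 Hz
    return {
        'sr_words': len(prompt.split()) / snd.duration,
        'sr_chars': len(prompt.replace(' ', '')) / snd.duration,
        'f0_min': voiced.min(), 'f0_mean': voiced.mean(),
        'f0_sd': voiced.std(), 'f0_max': voiced.max(),
        'duration': snd.duration,
    }
```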
", "num": null }, "TABREF4": { "text": "\u00b1 0.33 3.02 \u00b1 0.31 3.55 \u00b1 0.29 E 4.13 \u00b1 0.28 2.87 \u00b1 0.32 3.72 \u00b1 0.28 F 3.54 \u00b1 0.31 3.10 \u00b1 0.34 2.87 \u00b1 0.34 G 4.27 \u00b1 0.22 2.91 \u00b1 0.33 3.32 \u00b1 0.30 H 3.97 \u00b1 0.28 3.15 \u00b1 0.27 3.73 \u00b1 0.28", "type_str": "table", "html": null, "content": "
IDQ1Q2Q3
A2.78 \u00b1 0.36 2.84 \u00b1 0.33 2.80 \u00b1 0.32
B1.82 \u00b1 0.36 1.66 \u00b1 0.30 1.50 \u00b1 0.30
C2.96 \u00b1 0.37 1.95 \u00b1 0.38 2.14 \u00b1 0.35
D3.57
The quality and amount of
", "num": null } } } }