{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:08:10.559970Z" }, "title": "How Might We Create Better Benchmarks for Speech Recognition?", "authors": [ { "first": "Al\u00ebna", "middle": [], "last": "Aks\u00ebnova", "suffix": "", "affiliation": {}, "email": "" }, { "first": "James", "middle": [], "last": "Flynn", "suffix": "", "affiliation": {}, "email": "jpflynn@google.com" }, { "first": "Pavel", "middle": [], "last": "Golik", "suffix": "", "affiliation": {}, "email": "golik@google.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The applications of automatic speech recognition (ASR) systems are proliferating, in part due to recent significant quality improvements. However, as recent work indicates, even state-of-the-art speech recognition systems-some which deliver impressive benchmark results, struggle to generalize across use cases. We review relevant work, and, hoping to inform future benchmark development, outline a taxonomy of speech recognition use cases, proposed for the next generation of ASR benchmarks. We also survey work on metrics, in addition to the de facto standard Word Error Rate (WER) metric, and we introduce a versatile framework designed to describe interactions between linguistic variation and ASR performance metrics.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "The applications of automatic speech recognition (ASR) systems are proliferating, in part due to recent significant quality improvements. However, as recent work indicates, even state-of-the-art speech recognition systems-some which deliver impressive benchmark results, struggle to generalize across use cases. We review relevant work, and, hoping to inform future benchmark development, outline a taxonomy of speech recognition use cases, proposed for the next generation of ASR benchmarks. We also survey work on metrics, in addition to the de facto standard Word Error Rate (WER) metric, and we introduce a versatile framework designed to describe interactions between linguistic variation and ASR performance metrics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The applications of ASR systems are many and varied; conversational virtual assistants on smartphones and smart-home devices, automatic captioning for videos, text dictation, and phone chat bots for customer support, to name a few. This proliferation has been enabled by significant gains in ASR quality. ASR quality is typically measured by word error rate (WER), or, informally, the Levenshtein distance between the target transcript and the machine-generated transcript (Levenshtein, 1966; Wang et al., 2003) -see Section 3.", "cite_spans": [ { "start": 473, "end": 492, "text": "(Levenshtein, 1966;", "ref_id": "BIBREF45" }, { "start": 493, "end": 511, "text": "Wang et al., 2003)", "ref_id": "BIBREF84" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Current state-of-the-art accuracy is now in low-single-digits for the widely used Librispeech benchmark set (Panayotov et al., 2015) , with e.g. Zhang et al. (2020) achieving a WER of 1.4%. However, as Szyma\u0144ski et al. (2020) have pointed out, overall, our current ASR benchmarks leave much to be desired when it comes to evaluating performance across multiple real-world applications. 
Typical benchmark sets beyond Librispeech include TIMIT (Garofolo et al., 1993) , Switchboard (Godfrey et al., 1992) , WSJ (Paul and Baker, 1992) , CALLHOME (Canavan et al., 1997) , and Fisher (Cieri et al., 2004 These benchmark sets cover a range of speech use cases, including read speech (e.g. Librispeech), and spontaneous speech (e.g. Switchboard).", "cite_spans": [ { "start": 108, "end": 132, "text": "(Panayotov et al., 2015)", "ref_id": "BIBREF61" }, { "start": 145, "end": 164, "text": "Zhang et al. (2020)", "ref_id": "BIBREF90" }, { "start": 202, "end": 225, "text": "Szyma\u0144ski et al. (2020)", "ref_id": "BIBREF81" }, { "start": 442, "end": 465, "text": "(Garofolo et al., 1993)", "ref_id": "BIBREF22" }, { "start": 480, "end": 502, "text": "(Godfrey et al., 1992)", "ref_id": "BIBREF24" }, { "start": 505, "end": 531, "text": "WSJ (Paul and Baker, 1992)", "ref_id": null }, { "start": 543, "end": 565, "text": "(Canavan et al., 1997)", "ref_id": "BIBREF11" }, { "start": 579, "end": 598, "text": "(Cieri et al., 2004", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "However, with many ASR systems benchmarking in the low single digits, small improvements have become increasingly difficult to interpret, and any remaining errors may be concentrated. For example, for Switchboard, a considerable portion of the remaining errors involve filler words, hesitations and non-verbal backchannel cues (Xiong et al., 2017; Saon et al., 2017) .", "cite_spans": [ { "start": 327, "end": 347, "text": "(Xiong et al., 2017;", "ref_id": "BIBREF87" }, { "start": 348, "end": 366, "text": "Saon et al., 2017)", "ref_id": "BIBREF74" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Furthermore, achieving state-of-the-art results on one of these sets does not necessarily mean that an ASR system will generalize successfully when faced with input from a wide range of domains at inference time: as Likhomanenko et al. (2020) show, \"no single validation or test set from public datasets is sufficient to measure transfer to other public datasets or to real-world audio data\". In one extreme example, Keung et al. (2020) show that modern ASR architectures may even start emitting repetitive, nonsensical transcriptions when faced with audio from a domain that was not covered at training time-even in cases where it would have achieved perfectly acceptable Librispeech evaluation numbers. Inspired by Goodhart's law, which states that any measure that becomes a target ceases to be a good measure, we argue that as a field, it behooves us to think more about better benchmarks in order to gain a well-rounded view of the performance of ASR systems across domains.", "cite_spans": [ { "start": 216, "end": 242, "text": "Likhomanenko et al. (2020)", "ref_id": "BIBREF49" }, { "start": 417, "end": 436, "text": "Keung et al. (2020)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we make three contributions. First, we provide a taxonomy of relevant domains, based on our experience developing ASR systems for use in many different products, with the goal of helping make nextgeneration benchmarks as representative as possible (Biber, 1993) . 
Second, we argue that optimizing only for WER, as most current benchmarks imply, does not reflect considerations that are ubiquitous in real-world deployments of ASR technology: for example, production considerations such as latency and compute resources can imply additional interrelated optimization objectives. We survey relevant work on additional metrics that can be used to measure ASR systems. Third, we describe what metadata would be useful in next-generation benchmark data sets in order to help analyze the interaction between linguistic variation and performance of ASR systems-for example, to measure how well an ASR system holds up in the face of sociolinguistic variation within the target language, or second-language accents, as in e.g. Feng et al. (2021) . See also https://github.com/syhw/wer_are_we; additionally, FAIR recently released the Casual Conversations dataset intended for AI fairness measurements (Hazirbas et al., 2021) .", "cite_spans": [ { "start": 263, "end": 276, "text": "(Biber, 1993)", "ref_id": "BIBREF7" }, { "start": 617, "end": 640, "text": "(Hazirbas et al., 2021)", "ref_id": null }, { "start": 1202, "end": 1220, "text": "Feng et al. (2021)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "With ASR use cases spanning many applications and tasks, ideally ASR systems would be robust to various classes of variation in speech input. For example, an ASR system which provides automatic captions for video meetings would recognize words from many different semantic fields, adaptable to the topic of the meeting. Speech characteristics may also vary across domains: for example, the speech style used when dictating text messages differs from the style of a group conversation, where speakers may occasionally talk over each other.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ASR Use Cases", "sec_num": "2" }, { "text": "An ideal benchmark set would include what we will call 'horizontal' and 'vertical' variation. Horizontal challenges refer to a wide variety of scenarios where ASR may be used, while vertical challenges involve e.g. diversity in topics, encoding formats, and others.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ASR Use Cases", "sec_num": "2" }, { "text": "2.1 Horizontals: ASR applications ASR application domains can be roughly subdivided based on the number of speakers, the mode of speech (spontaneous vs. prepared speech) and the intended recipient (human or device). An ideal benchmark set would cover as many of these horizontals as possible-e.g. through merging existing benchmark sets, as does Likhomanenko et al. (2020) , and adding additional data to cover any gaps.", "cite_spans": [ { "start": 346, "end": 372, "text": "Likhomanenko et al. (2020)", "ref_id": "BIBREF49" } ], "ref_spans": [], "eq_spans": [], "section": "ASR Use Cases", "sec_num": "2" }, { "text": "Dictation Text dictation is a popular use case of ASR systems -one of the first successful commercial applications with broad appeal. This feature serves both convenience and accessibility, allowing users to enter text without manually typing. Dictation tends to involve relatively slow speech, typically that of a single speaker, who is aware they are interacting with a device, and who may consciously modify their speech patterns to facilitate device understanding (Cohn et al., 2020) . Dictation may have applications in many fields. 
One with many idiosyncratic challenges is medical dictation, where ASR systems are used to help medical personnel take notes and generate medical records (Miner et al., 2020; Mani et al., 2020) . This poses challenges in the support of domain-specific jargon, which we will discuss in subsection 2.2. In a related application, dictation practice is sometimes used by language learners, often in combination with a pronunciation feedback system (McCrocklin, 2019) . In other contexts, transcription of dictated audio may be part of a composite pipeline, such as in automatic translation, where the initial transcript feeds a subsequent system for translation to another language.", "cite_spans": [ { "start": 468, "end": 487, "text": "(Cohn et al., 2020)", "ref_id": "BIBREF17" }, { "start": 692, "end": 712, "text": "(Miner et al., 2020;", "ref_id": "BIBREF55" }, { "start": 713, "end": 731, "text": "Mani et al., 2020)", "ref_id": "BIBREF51" }, { "start": 982, "end": 1000, "text": "(McCrocklin, 2019)", "ref_id": "BIBREF53" } ], "ref_spans": [], "eq_spans": [], "section": "ASR Use Cases", "sec_num": "2" }, { "text": "Voice Search and Control Voice search and other conversational assistant products enable users to access information or invoke actions via spoken input. Similar to dictation, audio in such settings is typically single-speaker, with human-to-device characteristics. Compared to dictation, queries may be somewhat shorter, and may contain proper nouns (e.g. place names or business names). Semiotic-class tokens such as times (Sproat et al., 2001) are also more common in this setting. A related type of human-to-device speech is interactive voice response (IVR), where callers to customer support may first interact with a voice chatbot, which can help gather information prior to redirecting the call, or potentially resolve issues itself. (Inam et al., 2017) .", "cite_spans": [ { "start": 424, "end": 445, "text": "(Sproat et al., 2001)", "ref_id": "BIBREF79" }, { "start": 740, "end": 759, "text": "(Inam et al., 2017)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "ASR Use Cases", "sec_num": "2" }, { "text": "Voicemails, Oration, and Audiobooks While dictation users may modify their speech based on the knowledge that they are dictating directly to a device, ASR systems may also be used to help provide transcriptions for voicemail messages (Padmanabhan et al., 2002; Liao et al., 2010) , parliamentary speeches (Gollan et al., 2005; Steingr\u00edmsson et al., 2020) , and so on. Such settings, while still typically single-speaker, include artifacts of spontaneity-e.g. fillers or hesitations like 'uh', backchannel speech, as well as disfluencies, false starts, and corrections (Jamshid Lou and Johnson, 2020; Mendelev et al., 2021; Knudsen et al., 2020) . Transcribing audiobooks includes elements of dictation and oration: due to their read-speech nature, audiobooks typically contain less spontaneity than typical human-to-human speech (Igras-Cybulska et al.), but they are usually more natural than human-to-device speech. 
2", "cite_spans": [ { "start": 234, "end": 260, "text": "(Padmanabhan et al., 2002;", "ref_id": "BIBREF60" }, { "start": 261, "end": 279, "text": "Liao et al., 2010)", "ref_id": "BIBREF47" }, { "start": 305, "end": 326, "text": "(Gollan et al., 2005;", "ref_id": "BIBREF25" }, { "start": 327, "end": 354, "text": "Steingr\u00edmsson et al., 2020)", "ref_id": "BIBREF80" }, { "start": 568, "end": 599, "text": "(Jamshid Lou and Johnson, 2020;", "ref_id": "BIBREF35" }, { "start": 600, "end": 622, "text": "Mendelev et al., 2021;", "ref_id": "BIBREF54" }, { "start": 623, "end": 644, "text": "Knudsen et al., 2020)", "ref_id": "BIBREF41" } ], "ref_spans": [], "eq_spans": [], "section": "ASR Use Cases", "sec_num": "2" }, { "text": "In settings such as human-to-human conversations, the task of the ASR system typically involves transcribing spontaneous speech among several participants within a single audio recording. For example, meeting transcription can help to improve accessibility of video meetings, or may serve to document conversations (Kanda et al., 2021) ; see e.g. Janin et al. (2004) ; Carletta et al. (2005) for relevant data sets. Another use case for transcriptions of human-to-human conversations is customer-agent conversations, as well as other types of telephony, which can help monitor the quality of phone-based customer service.", "cite_spans": [ { "start": 315, "end": 335, "text": "(Kanda et al., 2021)", "ref_id": "BIBREF37" }, { "start": 347, "end": 366, "text": "Janin et al. (2004)", "ref_id": null }, { "start": 369, "end": 391, "text": "Carletta et al. (2005)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Conversations and Meetings", "sec_num": null }, { "text": "Podcasts, Movies and TV Podcast transcription forms a related, and fast-growing, application area, with recent data sets including Clifton et al. (2020) . Podcast transcription is in some ways similar to the long-standing task of automatically transcribing interviews, e.g. to help make them more accessible, as in various oral-history projects (Byrne et al., 2004) . Finally, another similar use case is the transcription of motion pictures, including documentaries, which may require increased robustness to non-speech audio, such as music and special effects. Spontaneous speech is common to these human-to-human, multi-speaker settings, with fillers such as 'uh', overlap, and interruption between speakers. We draw a distinction between movie subtitling and TV closed captioning. Subtitling is an 'offline' task in that the entire audio is available to the ASR system at recognition time, and the setting allows for multiple passes, including human post-editors. Compare to closed captioning, where streaming ASR processes a live broadcast with tight latency constraints. Additionally, these two modes have different transcription conventions and formatting requirements. Subtitles often contain non-verbal cues that support comprehension for hearing impaired, and are optimized for readability. Conversely, closed captions are often projected in upper case with fewer constraints, such as line breaks, to denote speaker turns.", "cite_spans": [ { "start": 131, "end": 152, "text": "Clifton et al. (2020)", "ref_id": "BIBREF16" }, { "start": 345, "end": 365, "text": "(Byrne et al., 2004)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Conversations and Meetings", "sec_num": null }, { "text": "ASR applications do not just differ in the style of speech. 
Other dimensions include: the semantic content of the input speech (a lecture about nuclear physics involves very different terminology than a phone conversation to set up a car maintenance appointment), the audio encoding format, and sample rate, among others. Again, the ideal benchmark should cover as many of these factors as possible.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Verticals: Technical challenges", "sec_num": "2.2" }, { "text": "Terminology and Phrases ASR systems applied to a wide range of domains need to recognize hundreds of thousands, if not millions, of distinct words. Such systems typically involve a language model trained on large volumes of text from multiple sources. To benchmark an ASR system's capability across a wide range of topics, test sets could include terms and phrases from many different fields: consider medical terminology (e.g. 'ribonucleotides'), historical phrases (e.g. 'Yotvingians'), and many more. ASR systems should also be savvy to neologisms (e.g. 'doomscrolling'), although, admittedly, the fast-changing nature of neologisms and trending phrases makes this particularly challenging. Another area that deserves special attention in measurements is loanwords, which may have pronunciations that involve unusual grapheme-to-phoneme correspondences; such words may even necessitate personalized pronunciation learning (Bruguier et al., 2016) .", "cite_spans": [ { "start": 925, "end": 948, "text": "(Bruguier et al., 2016)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Verticals: Technical challenges", "sec_num": "2.2" }, { "text": "Speed Recordings where speech is significantly faster or slower than average may pose additional recognition challenges (Siegler and Stern, 1995; Fosler-Lussier and Morgan, 1999) , so the ideal benchmark should also cover samples with various speech rates. This is particularly important for paid services, where users sometimes artificially speed up the recordings or cut out easily detectable portions of silence in order to reduce costs. Such processing can introduce unnatural shifts in pitch and add confusion to the punctuation at speaker turn, and sentence boundaries.", "cite_spans": [ { "start": 120, "end": 145, "text": "(Siegler and Stern, 1995;", "ref_id": "BIBREF78" }, { "start": 146, "end": 178, "text": "Fosler-Lussier and Morgan, 1999)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Verticals: Technical challenges", "sec_num": "2.2" }, { "text": "Acoustic Environment The setting in which the input audio was recorded (real-life or phone conversation, video call, dictation) can also materially impact ASR performance, and settings with high amounts of background noise can be particularly challenging. Ideally, test sets should be available to measure how robust an ASR system is in the face of background noise and other environmental factors (Park et al., 2019; Kinoshita et al., 2020) . The entertainment domain contains a large amount of scenes with background music, which often have lyrics that are usually not meant to be transcribed. 
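One way to build the noise-robustness slices mentioned above is to mix clean recordings with noise at controlled signal-to-noise ratios (keeping in mind, as noted under Representativeness in subsection 2.3, that artificially added noise does not capture the Lombard effect). A rough sketch, assuming mono float waveforms at a shared sample rate; the names are illustrative:

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale `noise` so the speech-to-noise power ratio equals `snr_db`, then add it to `speech`."""
    if len(noise) < len(speech):  # loop the noise if it is shorter than the utterance
        noise = np.tile(noise, int(np.ceil(len(speech) / len(noise))))
    noise = noise[: len(speech)]
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + scale * noise

# e.g. noisy = mix_at_snr(clean_waveform, cafe_noise, snr_db=10.0)  # inputs assumed to exist
```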
Even call center conversations sometimes contain hold music which is not part of the payload of the call.", "cite_spans": [ { "start": 398, "end": 417, "text": "(Park et al., 2019;", "ref_id": "BIBREF63" }, { "start": 418, "end": 441, "text": "Kinoshita et al., 2020)", "ref_id": "BIBREF40" } ], "ref_spans": [], "eq_spans": [], "section": "Verticals: Technical challenges", "sec_num": "2.2" }, { "text": "Encoding Formats Lastly, different audio encodings (linear PCM, A-law, \u00b5-law), codecs (FLAC, OPUS, MP3) and non-standard sample rates such as 17 kHz may affect recognition quality, and should be represented (Sanderson and Paliwal, 1997; Hokking et al., 2016) . The same holds for audio that has been up-or down-sampled, e.g. between 8 kHz typical for telephony and 16 kHz or above, for broadcast media.", "cite_spans": [ { "start": 207, "end": 236, "text": "(Sanderson and Paliwal, 1997;", "ref_id": "BIBREF73" }, { "start": 237, "end": 258, "text": "Hokking et al., 2016)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Verticals: Technical challenges", "sec_num": "2.2" }, { "text": "We argue that the more horizontal and vertical areas are covered by a benchmark, the more representative it will be, and hence the more appropriate for measuring ASR progress. There are some practical matters that are also important to consider when creating the ideal benchmark.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Practical Issues", "sec_num": "2.3" }, { "text": "Transcription Conventions Creating transcriptions of human speech in a consistent manner can be unexpectedly challenging: for example, should hesitations like 'uh' be transcribed? How should transcribers handle unusual cases like the artist 'dead mouse', which is written as 'deadmau5' by convention? And if a speaker says 'wanna', should the transcription reflect that as such, or should the transcriber transcribe that as 'want to'? The answer to such questions will depend on the downstream use context (e.g. a dialog system, where hesitations may be useful, or an email message, where they may need to be omitted instead). For example, while in closed captioning or podcast transcriptions omitting repetitions, disfluencies, and filler words (e.g. \"like\", \"kind of\") is considered desirable, this might not be appropriate for some other ASR domains such as subtitling. Defining and applying a comprehensive set of transcription conventions, as e.g. Switchboard (Godfrey et al., 1992) and CORAAL (Kendall and Farrington, 2020) , is critical in building high-quality data sets. It is also important to detect and correct transcription errors in annotated corpora (Rosenberg, 2012) .", "cite_spans": [ { "start": 965, "end": 987, "text": "(Godfrey et al., 1992)", "ref_id": "BIBREF24" }, { "start": 999, "end": 1029, "text": "(Kendall and Farrington, 2020)", "ref_id": "BIBREF38" }, { "start": 1165, "end": 1182, "text": "(Rosenberg, 2012)", "ref_id": "BIBREF69" } ], "ref_spans": [], "eq_spans": [], "section": "Practical Issues", "sec_num": "2.3" }, { "text": "Perhaps the most important choice in such transcription conventions is whether to adopt 'spoken-domain' transcriptions, where numbers are spelled out in words (e.g. 'three thirty'), or 'written-domain' transcriptions, where they are rendered in the typical written form ('3:30'). Many data sets use spoken-domain transcriptions only, but often in real-world ASR deployments it is valuable for readability and downstream usage (e.g. 
by a natural-language understanding system), to have fully-formatted, written-domain transcripts, as described by O'Neill et al. (2021)-who also provide a written-domain benchmark data set. Representativeness For any ASR test set, at least two considerations come into play: first, how closely does the test set approximate reality; and second, is the test set sufficiently large to be representative? For example, test sets that are intended to measure how well an ASR system deals with speech with background noise should have a realistic amount of background noise: not too little, but also not too much-e.g. to the point that even human listeners stand no chance of transcribing the audio correctly. Adding noise artificially, as established e.g. by the Aurora corpora (Pearce and Hirsch, 2000; Parihar and Picone, 2002) , does not take into account the Lombard effect. In terms of size, analyses akin to Guyon et al. (1998) are helpful to ensure that any change is statistically significant; we are not aware of much work along these lines for ASR systems specifically, but it seems like it would be worthwhile to explore this area more. The ultimate goal should be to increase the predictive power of error metrics.", "cite_spans": [ { "start": 1205, "end": 1230, "text": "(Pearce and Hirsch, 2000;", "ref_id": "BIBREF66" }, { "start": 1231, "end": 1256, "text": "Parihar and Picone, 2002)", "ref_id": "BIBREF62" }, { "start": 1341, "end": 1360, "text": "Guyon et al. (1998)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Practical Issues", "sec_num": "2.3" }, { "text": "Assume, for the sake of argument, that an impressive selection of test sets has been collected in order to create our imagined ideal next-generation benchmark for ASR, covering many use cases, technical challenges, and so on. The performance of an ASR system could now be measured simply by computing a single, overall WER across all the utterances in this collection of test sets-and a system that yields lower WER on this benchmark could be said to be 'better' than a system with higher WER.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metrics: WER and Beyond", "sec_num": "3" }, { "text": "However, in a real-world deployment setting, the question of which system is 'best' typically relies on an analysis of many metrics. For example, imagine a system with a WER of 1.5% but an average transcription latency of 2500 milliseconds, and another system that achieves 1.6% WER but a latency of only 1250 milliseconds: in many settings, the second system could still be more suitable for deployment, despite achieving worse WER results. Of course, 'latency' itself is not a well-defined term: sometimes the measurement is reported as the average delay between the end of each spoken word and the time it is emitted by the ASR system, while in other cases the measure is based only on the first or the last word in an utterance. Neither is well-defined in presence of recognition errors. Yet another kind of latency is end-to-end latency, involving everything between the microphone activity and the final projection of results, including network overhead and optional post-processing like capitalization, punctuation etc. A \"pure\" ASR latency metric ignores those and focuses on the processing time of the recognizer, while latency in the context of voice assistant commands may consider the delay before successful recognition of a command, which might sometimes precede the actual end of utterance. 
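To make one of the latency notions above concrete: given the time at which each spoken word ends (e.g. from a forced alignment) and the wall-clock time at which the recognizer emitted it, per-word emission delays and their percentiles can be computed as below. This is only a sketch; the two sequences are assumed to be already aligned word-for-word, and the input names are illustrative:

```python
import numpy as np

def emission_latencies(word_end_times_s, emission_times_s):
    """Per-word delay between when a word was finished being spoken and when it was emitted."""
    return np.array(emission_times_s) - np.array(word_end_times_s)

# lat = emission_latencies(alignment_end_times, recognizer_emit_times)  # assumed inputs, seconds
# print(np.percentile(lat, [50, 90, 99]))
```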
In this section, we describe how, much like latency, even WER itself has many nuances, and we point to other metrics, beyond WER and latency, that can be taken into account when measuring ASR systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metrics: WER and Beyond", "sec_num": "3" }, { "text": "The workhorse metric of ASR is the Word Error Rate, or WER. Calculating WER is relatively easy on spoken-domain transcriptions with no formatting (e.g. 'set an alarm for seven thirty') but quickly becomes a nuanced matter when processing written-domain transcriptions-for example, if the ground truth is provided as 'Set an alarm for 7:30.' with capitalization and punctuation, is it an error in WER terms if the system emits lowercase 'set' instead of uppercase 'Set', as given in the ground truth? Typically, for standard WER calculations in such scenarios, capitalization and word-final punctuation are not considered to be a factor, and other metrics are calculated for fully-formatted WER-e.g. case-sensitive WER, where 'set' vs 'Set' would be considered an error.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "WER", "sec_num": "3.1" }, { "text": "WER can also be calculated on only a subset of relevant words or phrases: for example, it may be helpful to compute separate error rates for different kinds of semiotic classes, such as spoken punctuation, times, or phone numbers-as well as for different semantic areas, such as relevant domain terminology vs. generic English words. The assessment of ASR quality on rare phrases is yet another issue-average WER does not always adequately reflect how well an ASR system picks up rare yet important words, suggesting it may be valuable to know WER for common and less common words. A related approach is to use precision-recall, e.g. as Chiu et al. (2018) do for medical terminology. Such 'sliced' approaches can help provide insight into the recognition quality of words or phrases that are particularly salient in a given setting. For example, if a system that is intended for use in a voicemail transcription setting achieves 3% overall WER, but it mistranscribes every phone number, that system would almost certainly not be preferred over a system that achieves 3.5% overall WER, but that makes virtually no mistakes on phone numbers. As Peyser et al. (2019) show, such examples are far from theoretical; fortunately, it is also possible to create synthetic test sets using text-to-speech systems to get a sense of WER in a specific context. Standard tools like NIST SCLITE 3 can be used to calculate WER and various additional statistics.", "cite_spans": [ { "start": 637, "end": 655, "text": "Chiu et al. (2018)", "ref_id": "BIBREF14" }, { "start": 1143, "end": 1163, "text": "Peyser et al. (2019)", "ref_id": "BIBREF68" } ], "ref_spans": [], "eq_spans": [], "section": "WER", "sec_num": "3.1" }, { "text": "Importantly, it is possible to calculate the local WER on any level of granularity: utterance, speaker turn, file, entire recording etc. The average WER alone, weighted by the number of words, is not sufficient to describe the shape of the distribution over the individual local measurements. Given two ASR systems with identical WERs, we almost always prefer the one with the lower standard deviation, as it reduces the uncertainty w.r.t. the worst case. A more accurate metric that samples the shape of the distribution consists of percentiles (e.g. 90, 95 or 99) that are more suitable to provide an upper bound. 
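A sketch of this kind of reporting, reusing the word_error_rate helper sketched in the introduction (all names are illustrative):

```python
import numpy as np

def wer_report(references, hypotheses):
    """Word-weighted corpus WER plus the distribution of per-utterance WERs."""
    per_utt = np.array([word_error_rate(r, h) for r, h in zip(references, hypotheses)])
    ref_lens = np.array([len(r.split()) for r in references])
    corpus_wer = float(np.sum(per_utt * ref_lens) / np.sum(ref_lens))  # total errors / total reference words
    return {
        "corpus_wer": corpus_wer,
        "std": float(per_utt.std()),
        "p90": float(np.percentile(per_utt, 90)),
        "p95": float(np.percentile(per_utt, 95)),
        "p99": float(np.percentile(per_utt, 99)),
    }
```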
Additionally, reporting the standard deviation allows researchers to judge whether an improvement in WER is significant or just a statistical fluctuation. The same argument holds true for latency.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "WER", "sec_num": "3.1" }, { "text": "Finally, WER can also be calculated on not just the top machine hypothesis, but also on the full n-best list, as in e.g. Biadsy et al. (2017).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "WER", "sec_num": "3.1" }, { "text": "3 https://www.nist.gov/itl/iad/mig/tools", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "WER", "sec_num": "3.1" }, { "text": "Correctly transcribing speech into text is the most critical part of an ASR system, but downstream use cases may require more than just a word-by-word textual transcription of the input audio. For example, having per-word confidence scores can be helpful in dialog systems (Yu et al., 2011) ; having accurate timestamps at the word level is essential in many applications of the long-form domain, such as closed captioning, subtitling and keyword search; having phonemic transcriptions for every word enables downstream disambiguation (e.g. when the transcription gives 'live', did the user say the adjective [l\u0131v] or the verb [la\u0131v]); and emitting word timings to indicate where each word appeared in the audio can be important for search applications, especially for longer recordings. The ideal ASR benchmark would also make it possible to verify this metadata: for example, forced alignment can be used to infer where in the audio words appear, and to check how accurately an ASR system is emitting word timings (Sainath et al., 2020a) . Speaker diarization is yet another type of metadata that can be emitted at a per-word or per-phrase level, for which independent benchmarks already exist (Ryant et al., 2021) .", "cite_spans": [ { "start": 273, "end": 290, "text": "(Yu et al., 2011)", "ref_id": "BIBREF88" }, { "start": 608, "end": 613, "text": "[l\u0131v]", "ref_id": null }, { "start": 1028, "end": 1051, "text": "(Sainath et al., 2020a)", "ref_id": "BIBREF71" }, { "start": 1211, "end": 1231, "text": "(Ryant et al., 2021)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Metadata about Words", "sec_num": "3.2" }, { "text": "A general metric for the processing speed is the real-time factor (RTF), commonly defined as the ratio between the processing wall-clock time and the raw audio duration (Liu, 2000) . Streaming ASR systems are required to operate at an RTF below one, but in applications that do not require immediate processing an RTF over one might be acceptable. As with WER and latency, RTF samples form a distribution, whose shape is important in understanding the behavior in the worst case. The process of finding the most likely hypothesis in ASR (often referred to as \"decoding\" for historical reasons) requires an efficient exploration of the search space: a subset of all possible hypotheses. The larger the search space, the slower the search, but the more likely the recognizer is to find the correct hypothesis. A small search space allows for quick decoding, but often comes at the cost of higher WER. It is common to report an RTF vs WER curve which shows all possible operating points, allowing for a mutual trade-off. Note this definition operates with the wall-clock time, thus ignoring the hardware requirements. 
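A minimal way to measure the RTF of a single recording under this definition (recognize is a stand-in for whatever decoding call the system exposes):

```python
import time

def real_time_factor(recognize, audio, audio_duration_s: float) -> float:
    """Wall-clock processing time divided by the duration of the audio."""
    start = time.perf_counter()
    recognize(audio)                   # stand-in for the actual decoding call
    elapsed = time.perf_counter() - start
    return elapsed / audio_duration_s  # streaming systems need this to stay below 1.0
```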
It is common to normalize the RTF by the number of CPU cores and hardware accelerators.", "cite_spans": [ { "start": 169, "end": 180, "text": "(Liu, 2000)", "ref_id": "BIBREF50" } ], "ref_spans": [], "eq_spans": [], "section": "Real-Time Factor", "sec_num": "3.3" }, { "text": "For ASR systems that stream output to the user while recognition is ongoing, as in many voice assistant and dictation applications, additional metrics will be useful, e.g. measuring the stability of partial results, which reflects the number of times the recognizer changes previously emitted words while recognizing a query (Shangguan et al., 2020) . A related dimension is quality of the intermediate hypotheses: a streaming system that emits highly inaccurate intermediate hypotheses can yield a jarring user experience, even if the final hypothesis achieves an acceptable WER. This is particularly important in combination with a downstream application like machine translation that can be very sensitive to corrections in partial hypotheses (Ansari et al., 2020 ).", "cite_spans": [ { "start": 325, "end": 349, "text": "(Shangguan et al., 2020)", "ref_id": "BIBREF75" }, { "start": 746, "end": 766, "text": "(Ansari et al., 2020", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Streaming ASR", "sec_num": "3.4" }, { "text": "Yet another factor is streaming latency, e.g. how quickly partials are emitted (Shangguan et al., 2021) , and more generally, the delay between the end of the user's input and the finalized transcription (Sainath et al., 2020b; . The accuracy of the endpointer module can significantly affect this latency: endpointers need to strike the right balance between keeping the microphone open while the user may still continue speaking (e.g. if the user pauses briefly to collect their thoughts), while closing it as soon as the user is likely to be done speaking, and a number of relevant endpointer metrics can be calculated, as in e.g. .", "cite_spans": [ { "start": 79, "end": 103, "text": "(Shangguan et al., 2021)", "ref_id": "BIBREF76" }, { "start": 204, "end": 227, "text": "(Sainath et al., 2020b;", "ref_id": "BIBREF72" } ], "ref_spans": [], "eq_spans": [], "section": "Streaming ASR", "sec_num": "3.4" }, { "text": "Latency is influenced by many factors beyond the quality of the endpointer: for example, the number of parameters in the ASR model, the surrounding software stack, and the computational resources available will impact the duration of the recognition process for an audio recording, in both streaming and non-streaming -batch recognition settings. Compressing models can help them run faster, and in more settings (Peng et al., 2021) , although the impact of shrinking models should be measured carefully (Hooker et al., 2020a,b) .", "cite_spans": [ { "start": 413, "end": 432, "text": "(Peng et al., 2021)", "ref_id": "BIBREF67" }, { "start": 504, "end": 528, "text": "(Hooker et al., 2020a,b)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Inference and Training", "sec_num": "3.5" }, { "text": "Beyond inference, training may also be worth benchmarking in more detail: factors such as the number of parameters in the model, the model architecture, the amount of data used, the training software, and the hardware available will influence how long it takes to train an ASR model using a given algorithm. 
Benchmarks such as MLPerf (Mattson et al., 2020) do not yet incorporate speech recognition, but this may be worth exploring in the future.", "cite_spans": [ { "start": 334, "end": 356, "text": "(Mattson et al., 2020)", "ref_id": "BIBREF52" } ], "ref_spans": [], "eq_spans": [], "section": "Inference and Training", "sec_num": "3.5" }, { "text": "Certain phrases or words are sometimes expected in dialogue contexts (e.g. 'yes' or 'no'), along with particular types of words (e.g. brand names in the context of shopping). In such cases, ASR systems may allow for contextual biasing to increase the language model probability of relevant words or phrases (Aleksic et al., 2015) . Measuring contextual biasing typically involves evaluating a relevant test set twice: once with, and once without the contextual biasing enabled (the default behavior). Even when contextual biasing is enabled, it will typically be desirable for the system to continue to recognize other words and phrases without too much of an accuracy impact, so that recognition results remain reasonable in the event that the input does not contain the words or phrases that were expected-typically anti-sets will be used, as described by Aleksic et al. (2015) . Contextual biasing plays a key role in classical dialogue systems like IVR.", "cite_spans": [ { "start": 308, "end": 330, "text": "(Aleksic et al., 2015)", "ref_id": "BIBREF1" }, { "start": 859, "end": 880, "text": "Aleksic et al. (2015)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Contextual Biasing", "sec_num": "3.6" }, { "text": "In some cases, ASR models can hallucinate transcriptions: e.g. providing transcriptions for audio even where no speech is present, or simply misbehaving on out-of-domain utterances (Liao et al., 2015; Keung et al., 2020) . Intuitively, this type of error should be reported explicitly as the \"insertion rate\", which is calculated as part of the WER anyway. However, insertion errors are rather rare and do not stand out strongly in the presence of speech and natural recognition errors.", "cite_spans": [ { "start": 181, "end": 200, "text": "(Liao et al., 2015;", "ref_id": "BIBREF48" }, { "start": 201, "end": 220, "text": "Keung et al., 2020)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Hallucination", "sec_num": "3.7" }, { "text": "Measuring whether an ASR system is prone to such hallucinations can be done by running it on test sets from domains that were unseen at training time. In addition, it is possible to employ reject sets which contain various kinds of audio that should not result in a transcription: for example, such reject sets may cover various noises (e.g. AudioSet Gemmeke et al. 
(2017)), silence, speech in other languages, and so on.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hallucination", "sec_num": "3.7" }, { "text": "A related topic is adversarial attacks, when a particular message is 'hidden' in audio in a way that humans cannot hear, but which may deceive ASR systems into transcribing in an unexpected way; measuring robustness to such issues would be desirable, but it remains an active area of research-much like the creation of such attacks more broadly (Carlini and Wagner, 2018) .", "cite_spans": [ { "start": 345, "end": 371, "text": "(Carlini and Wagner, 2018)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Hallucination", "sec_num": "3.7" }, { "text": "Finally, one aspect of ASR systems that tends to be important for real-world deployments, but which is hard to quantify in a numeric metric, is how easy it is to debug and fix any misrecognitions that may arise. For example, if a new word such as 'COVID-19' comes up which is not yet recognized by the system, it would be preferable if adding such a new word could be done without necessitating a full retrain of the system. While quantifying this property of ASR systems is hard, we believe that the degree to which it is easy to debug and fix any ASR system is worth mentioning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Debuggability and Fixability", "sec_num": "3.8" }, { "text": "As previously discussed, the ideal benchmark for ASR systems would cover as many horizontals and verticals as possible, and would involve various kinds of metrics beyond just WER. Another important dimension, however, would be the availability of demographic characteristics, and analyzing the metrics based on such characteristics. Such demographic characteristics may correlate with linguistic variation-for example, non-native speakers of English may have an accent showing traces of their native language-which may in turn impact ASR performance. Having demographic characteristics can help produce analyses like the one reported by Feng et al. (2021) , who analyzed differences in recognition performance for different accents, age ranges, and gender within an ASR system.", "cite_spans": [ { "start": 637, "end": 655, "text": "Feng et al. (2021)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Demographically Informed Quality", "sec_num": "4" }, { "text": "The ideal benchmark set, then, should include sufficient metadata to run similar analyses, enabling developers to understand how their system behaves when processing various accents or dialects; to see whether factors like gender and age influence recognition performance in their system. Linguistic variation may take many different shapes, including:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Demographically Informed Quality", "sec_num": "4" }, { "text": "\u2022 phonetic differences, e.g. vowel realizations that are specific to a given accent", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Demographically Informed Quality", "sec_num": "4" }, { "text": "\u2022 phonological differences, e.g. various number of phonemes in different dialects of a language", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Demographically Informed Quality", "sec_num": "4" }, { "text": "\u2022 lexical differences, e.g. 
region-specific terms", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Demographically Informed Quality", "sec_num": "4" }, { "text": "\u2022 syntactical differences, e.g. double-negatives", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Demographically Informed Quality", "sec_num": "4" }, { "text": "\u2022 voice quality differences, e.g. pitch differences, which are correlated with parameters such as gender and age (Liao et al., 2015) Fortunately, several data sets already exist with relevant demographic tags for many utterances, e.g. Mozilla Common Voice (Ardila et al., 2020) , which offers public data sets across many languages with dialect and accent tags. There are also academic data sets produced by sociolinguists, such as CORAAL for AAVE (Kendall and Farrington, 2020) , ESLORA for Galician Spanish (Barcala et al., 2018) , the Corpus Gesproken Nederlands for Dutch (van Eerten, 2007) , and others. Such corpora provide a useful blueprint for providing such metadata, and we believe that it would be valuable for similar tags to be available for as many other data sets as possible. As Andrus et al. (2021) show, at times it will likely be difficult to get the demographic metadata that is needed, but still, getting such data wherever possible is important-as they put it, \"what we can't measure, we can't understand\".", "cite_spans": [ { "start": 113, "end": 132, "text": "(Liao et al., 2015)", "ref_id": "BIBREF48" }, { "start": 256, "end": 277, "text": "(Ardila et al., 2020)", "ref_id": "BIBREF4" }, { "start": 446, "end": 476, "text": "(Kendall and Farrington, 2020)", "ref_id": "BIBREF38" }, { "start": 507, "end": 529, "text": "(Barcala et al., 2018)", "ref_id": "BIBREF5" }, { "start": 574, "end": 592, "text": "(van Eerten, 2007)", "ref_id": "BIBREF18" }, { "start": 793, "end": 813, "text": "Andrus et al. (2021)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Demographically Informed Quality", "sec_num": "4" }, { "text": "Even where demographic information is already present in ASR evaluation sets, it can be valuable to conduct an analysis of the target user base for a deployed ASR system in order to ensure that all relevant tags are available. For example, if a data set has labels for four distinct accents, but the target user base is known from sociolinguistic research to use six distinct accents, this gap will not necessarily be evident when running an analysis of any possible differences among the four accents for which tags are available. It is important to understand the sociolinguistic characteristics of the target user base, and to cover as many of these properties as possible. Given that language has almost infinite variation as you zoom in-in the extreme, everyone has a slightly different voice-this is a task that requires careful sociolinguistic judgement and analysis, calling for interdisciplinary collaboration between linguists and developers of ASR systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Demographically Informed Quality", "sec_num": "4" }, { "text": "Even when a rich set of tags is available, it can be difficult to interpret the results. We describe a simple, metric-independent population-weighted visualization framework designed to evaluate ASR systems based on such demographic metadata. 
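The kind of per-utterance tags discussed above could be captured in a small record attached to each test utterance; the following schema is hypothetical, not the format of any existing corpus:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class UtteranceMetadata:
    """Hypothetical per-utterance tags enabling demographically sliced ASR evaluation."""
    utterance_id: str
    dialect: Optional[str] = None              # regional variant or sociolect label
    l1_language: Optional[str] = None          # speaker's native language, for L2-accented speech
    age_band: Optional[str] = None             # e.g. "20-30"
    gender: Optional[str] = None               # self-reported, where available
    recording_condition: Optional[str] = None  # e.g. "telephony", "far-field", "video call"
```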
Our approach supports the different language variations outlined above, and we propose this analysis as a valuable addition to future benchmarks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Demographically Informed Quality", "sec_num": "4" }, { "text": "Factors like accents (native or non-native), dialects, gender, and others can result in linguistic variation, and this may in turn impact ASR performance. Thus it can be valuable to calculate WER, latency, and other metrics not just on a data set as a whole, but also to slice metrics based on such meta-linguistic parameters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Population-Weighted Slicing Framework", "sec_num": "4.1" }, { "text": "Such sliced metrics can be used to determine whether there is a performance gap between groups, and if so, what efforts may need to be undertaken to shrink such gaps. The ideal test set should be representative of the target user base, but as this may be hard to achieve at data collection time, it can make sense to re-weight any metrics based on real-world population statistics: for example, imagine a scenario where 98% of the recordings in a data set come from native speakers, with the remaining 2% coming from non-native speakers. If the target deployment setting involves more like 15% non-native speech, the metrics obtained over the 2% slice of the data set coming from non-native speakers should carry 15% of the weight.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Population-Weighted Slicing Framework", "sec_num": "4.1" }, { "text": "To make such analyses easier, we propose subdividing all speakers into mutually exclusive groups based on relevant linguistic or demographic criteria. For example, consider a scenario where the real-world population is subdivided into 3 mutually exclusive groups: group A (60% of the population), group B (30%), and group C (10%). The two subplots of Figure 1 visualize examples of evaluations of two ASR models for slices corresponding to these groups, with the WER scores represented by the height of the bars, and the width of the bars reflecting the size of the groups.", "cite_spans": [], "ref_spans": [ { "start": 351, "end": 359, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Population-Weighted Slicing Framework", "sec_num": "4.1" }, { "text": "Even if, in the actual test data set, group A covers 80% of the test data, with groups B and C accounting for 10% each (i.e. under-representing group B and over-representing group A), this population-weighted framework provides an intuitive way to address this imbalance, and to understand how ASR systems perform in the face of linguistic diversity. 
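Formally, this re-weighting amounts to replacing each group's share of the test data with its share of the target population. With the native/non-native example above, where non-native speech makes up 2% of the data but 15% of the expected traffic:

```latex
\mathrm{WER}_{\mathrm{weighted}} = \sum_{g} w_g \, \mathrm{WER}_g
  = 0.85 \cdot \mathrm{WER}_{\mathrm{native}} + 0.15 \cdot \mathrm{WER}_{\mathrm{non\text{-}native}},
```

where w_g is the share of group g in the target population, not its share of the test set.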
The average WER of the system can be calculated as an average of all WER scores across population groups, weighted according to the size of those groups-which may differ from the WER obtained by simply calculating the WER on the actual data set, as we have re-weighted based on the real-world distribution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Population-Weighted Slicing Framework", "sec_num": "4.1" }, { "text": "Importantly, while the average weighted WER is a useful metric, the full distribution should still be understood: continuing the example depicted on Figure 1 , the average WER for both scenarios in this case would be 10 4 , but the disparity between the various groups in the plot where group C achieves a WER of 19.3% is clearly much bigger in one scenario than another.", "cite_spans": [], "ref_spans": [ { "start": 149, "end": 157, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Population-Weighted Slicing Framework", "sec_num": "4.1" }, { "text": "Given WER measurements for several groups of speakers, we should also measure the disparity of the ASR performance across various groups. In a simplified way, one could calculate the difference between the best-performing and the worst-performing groups, but see Mitchell et al. (2020) for a general discussion of ML fairness metrics. While the WER gap in the best-group and the worst-performing group for the scenario depicted on the second subplot of Figure 1 is 3.5 absolute points, the gap is 12.8 absolute points for the distribution on the first subfigure-despite these two systems having the same average WER, one system is clearly more consistent than another.", "cite_spans": [ { "start": 263, "end": 285, "text": "Mitchell et al. (2020)", "ref_id": null } ], "ref_spans": [ { "start": 453, "end": 461, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Population-Weighted Slicing Framework", "sec_num": "4.1" }, { "text": "Slicing can be based on just a single parameter, such as accent, gender, or age, but in reality, speakers are likely to fall into several categories at once. Therefore, it may make sense to look at intersectional groups: for example, ASR performance of 20-30 years old female speakers of Chicano English from Miami. Obtaining such rich metadata, however, may be challenging. Also, the more groups we intersect, the stronger the effect of data sparsity becomes: it may be challenging to fill every bucket with enough samples to obtain solid statistics and to control for all other variables not considered. At any rate, as long as mutually exclusive groups can be defined-whether based on a single parameter or in an intersectional way-this framework can help provide a more thorough understanding of various ASR metrics. Weighting by population also allows re-balancing potentially unbalanced test sets, and gives insight into what kinds of ASR performance would be encountered by different groups.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Population-Weighted Slicing Framework", "sec_num": "4.1" }, { "text": "The goal of this approach is to generate new insights into the ASR accuracy for each slice without making assumptions about the causal interaction between the underlying latent variables. 
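A compact sketch of the two summary numbers used in this discussion, the population-weighted average WER and the best-to-worst group gap, using the group WERs from Figure 1 (the same values as in the footnote):

```python
def weighted_wer(group_wers: dict, population_shares: dict) -> float:
    """Average of per-group WERs, weighted by real-world population shares."""
    return sum(population_shares[g] * group_wers[g] for g in group_wers)

def wer_gap(group_wers: dict) -> float:
    """Absolute gap between the worst- and best-performing groups."""
    return max(group_wers.values()) - min(group_wers.values())

shares   = {"A": 0.6, "B": 0.3, "C": 0.1}
system_1 = {"A": 6.5, "B": 13.9, "C": 19.3}  # top subplot of Figure 1
system_2 = {"A": 8.9, "B": 11.4, "C": 12.4}  # bottom subplot of Figure 1

# Both systems average to 10.0% WER, but the gaps differ: 12.8 vs. 3.5 absolute points.
print(weighted_wer(system_1, shares), wer_gap(system_1))
print(weighted_wer(system_2, shares), wer_gap(system_2))
```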
The analytical methods we discuss here are much more detailed than what is commonly employed for ASR system evaluation nowadays, but this level of detail is more common in the field of variationist sociolinguistics, suggesting potential for future collaborations (Labov, 1990; Grama et al., 2019) .", "cite_spans": [ { "start": 450, "end": 463, "text": "(Labov, 1990;", "ref_id": "BIBREF43" }, { "start": 464, "end": 483, "text": "Grama et al., 2019)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Population-Weighted Slicing Framework", "sec_num": "4.1" }, { "text": "To evaluate ASR systems in the framework that we are proposing, it is crucial to define representative and mutually exclusive slices. While the classification we suggest in this section is by no means exhaustive, it can be used as a starting point.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Defining slices", "sec_num": "4.2" }, { "text": "Regional language variation Many languages have regional language variation. For example, in the United States alone, there are three main regional groups of dialects: the Inland North, the South, and the West (Labov, 1991) , with multiple cities developing their own regional language variants. Such regional variants may involve regional phonology ('get' rhymes with 'vet' in the North, and with 'fit' in the South), and even significant lexical and syntactic differences ('going/planning to' can be expressed as 'fixin' to' in the South). Aks\u00ebnova et al. (2020) have shown how such regional variation can be explored, and how it can impact ASR performance. Ideally, then, as many regional variants as possible should be covered by the ideal benchmark for a given language.", "cite_spans": [ { "start": 206, "end": 219, "text": "(Labov, 1991)", "ref_id": "BIBREF44" }, { "start": 538, "end": 560, "text": "Aks\u00ebnova et al. (2020)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Defining slices", "sec_num": "4.2" }, { "text": "Sociolects Along with regional differences, there may also be linguistic diversity introduced by speakers of various sociolects: in American English, one might think of AAVE, Chicano (Mexican-American) English, and others. For example, AAVE-covered by the CORAAL data set (Kendall and Farrington, 2020) -has distinctive syntactic constructions such as habitual be ('She be working') and perfective done ('He done run'), along with systematic phonological differences (Wolfram, 2004) . And even within a single sociolect such as AAVE there might be linguistic diversity. Sociolects may impact ASR quality (Koenecke et al., 2020) , and it would therefore be desirable for benchmarks to cover as many sociolects as possible.", "cite_spans": [ { "start": 274, "end": 304, "text": "(Kendall and Farrington, 2020)", "ref_id": "BIBREF38" }, { "start": 469, "end": 484, "text": "(Wolfram, 2004)", "ref_id": "BIBREF86" }, { "start": 607, "end": 630, "text": "(Koenecke et al., 2020)", "ref_id": "BIBREF42" } ], "ref_spans": [], "eq_spans": [], "section": "Defining slices", "sec_num": "4.2" }, { "text": "L2 background Speech produced by non-native (L2) speakers may reflect some characteristics of their native (L1) language (Bloem et al., 2016) , making it important to measure the impact of L2 accents on ASR accuracy. 
One relevant data set for English is the GMU Speech Accent Archive Weinberger (2015), which collects such data for L2 speakers of English.", "cite_spans": [ { "start": 112, "end": 132, "text": "(Bloem et al., 2016)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Defining slices", "sec_num": "4.2" }, { "text": "Gender, age, and pitch Recognition performance may vary depending on the gender or age of the speaker (Liao et al., 2015; Tatman, 2017; Tatman and Kasten, 2017; Feng et al., 2021) . In some cases, as in Common Voice (Ardila et al., 2020; Hazirbas et al., 2021) , self-reported metadata is available. Where such information is not available, it may make sense to fall back to a proxy analysis based on pitch-which is known to be correlated with factors such as age and gender-in order to understand whether there are recognition accuracy differences for various pitch buckets, as in Liao et al. (2015) .", "cite_spans": [ { "start": 102, "end": 121, "text": "(Liao et al., 2015;", "ref_id": "BIBREF48" }, { "start": 122, "end": 135, "text": "Tatman, 2017;", "ref_id": "BIBREF82" }, { "start": 136, "end": 160, "text": "Tatman and Kasten, 2017;", "ref_id": "BIBREF83" }, { "start": 161, "end": 179, "text": "Feng et al., 2021)", "ref_id": null }, { "start": 216, "end": 237, "text": "(Ardila et al., 2020;", "ref_id": "BIBREF4" }, { "start": 238, "end": 260, "text": "Hazirbas et al., 2021)", "ref_id": null }, { "start": 582, "end": 600, "text": "Liao et al. (2015)", "ref_id": "BIBREF48" } ], "ref_spans": [], "eq_spans": [], "section": "Defining slices", "sec_num": "4.2" }, { "text": "Speech impairments Accuracy rates of standard ASR systems may also degrade for speech produced by people with speech impairments. Recent work has investigated ways to collect relevant data (Grill and Tu\u010dkov\u00e1, 2016; Park et al., 2021) , enabling analyses of ASR systems in this area. However, given the high degree of variability in this space, a more robust path at least for the near-term future may be designing personalized ASR systems for people with non-standard speech (Shor et al., 2019) . Beyond speech impairments, voice technologies could bring benefits to people with various types of diseases and impairments such as Alzheimer's, Parkinson's, and hearing loss.", "cite_spans": [ { "start": 189, "end": 214, "text": "(Grill and Tu\u010dkov\u00e1, 2016;", "ref_id": "BIBREF27" }, { "start": 215, "end": 233, "text": "Park et al., 2021)", "ref_id": "BIBREF64" }, { "start": 475, "end": 494, "text": "(Shor et al., 2019)", "ref_id": "BIBREF77" } ], "ref_spans": [], "eq_spans": [], "section": "Defining slices", "sec_num": "4.2" }, { "text": "The ultimate goal of benchmarking should be the ability to predict how well an ASR system is going to generalize to new and unseen data. In the previous sections we have argued that a single aggregate statistic like the average WER can be too coarsegrained for describing the accuracy in a real-world deployment that targets multiple sociolinguistic slices of the population. 
Ideally, the insights generated by the proposed analysis would be actionable, informing decisions that range from the composition of the training data to fine-grained model tuning against a clear objective function.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "Before we conclude, we should point out that any benchmark that implemented even a fraction of the metrics outlined above would yield a wealth of information-which will likely pose challenges in terms of organizing, presenting, and understanding all this material. Model report cards, as outlined by Mitchell et al. (2019), may be a natural way to capture this information for an ASR system-although we would suggest calling them system report cards instead, given that most ASR systems do not consist solely of a single monolithic model. Given the sheer amount of variation in the ways in which people speak, and a large number of technical factors, measuring ASR systems is a complicated task. Today's benchmarks clearly leave room for improvement, whether by covering more horizontal domains (different kinds of speech), measuring the impact of cross-cutting vertical issues (e.g. background noise), using more metrics than just WER (e.g. latency), or including demographic characteristics. We hope that our survey of these areas, and the simple population-weighted visualization framework we introduced, can help improve future benchmarks-not just for English, but also for the thousands of other languages spoken in our world today. This will clearly be a long-term journey, but it will be very important for the field as a whole to find ways to measure ASR systems better as speech recognition research continues to advance.", "cite_spans": [ { "start": 304, "end": 326, "text": "Mitchell et al. (2019)", "ref_id": "BIBREF58" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "Transcription of audiobooks is a primary goal of Librispeech (Panayotov et al., 2015), one of the most common benchmarks for ASR today, even though, practically speaking, transcribing audiobook audio is not a common task for most real-world ASR systems-given that audiobooks are typically produced based on an existing 'transcription', namely the ground-truth written text of the book.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Top subplot: 6.5*0.6 + 13.9*0.3 + 19.3*0.1 = 10; bottom subplot: 8.9*0.6 + 11.4*0.3 + 12.4*0.1 = 10.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank our colleagues on the Google Speech team for many thoughtful discussions on this topic, especially Petar Aleksic, Geoff Fischer, Jonas Fromseier Mortensen, David Garcia, Millie Holt, Pedro J. Moreno, Pat Rondon, Benyah Shaparenko, and Eugene Weinstein.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": "6" } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Algorithmic exploration of American English dialects", "authors": [ { "first": "Al\u00ebna", "middle": [], "last": "Aks\u00ebnova", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Bruguier", "suffix": "" }, { "first": "Amanda", "middle": [], "last": "Ritchart-Scott", "suffix": "" }, { "first": "Uri", "middle": [], "last": "Mendlovic", "suffix": "" } ], "year": 2020, "venue": "Proc. 
IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Al\u00ebna Aks\u00ebnova, Antoine Bruguier, Amanda Ritchart- Scott, and Uri Mendlovic. 2020. Algorithmic exploration of American English dialects. In Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Bringing contextual information to Google speech recognition", "authors": [ { "first": "Petar", "middle": [], "last": "Aleksic", "suffix": "" }, { "first": "Mohammadreza", "middle": [], "last": "Ghodsi", "suffix": "" }, { "first": "Assaf", "middle": [], "last": "Michaely", "suffix": "" }, { "first": "Cyril", "middle": [], "last": "Allauzen", "suffix": "" }, { "first": "Keith", "middle": [], "last": "Hall", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Roark", "suffix": "" }, { "first": "David", "middle": [], "last": "Rybach", "suffix": "" }, { "first": "Pedro", "middle": [], "last": "Moreno", "suffix": "" } ], "year": 2015, "venue": "Proc. Interspeech", "volume": "", "issue": "", "pages": "468--472", "other_ids": {}, "num": null, "urls": [], "raw_text": "Petar Aleksic, Mohammadreza Ghodsi, Assaf Michaely, Cyril Allauzen, Keith Hall, Brian Roark, David Rybach, and Pedro Moreno. 2015. Bringing contextual information to Google speech recognition. In Proc. Interspeech 2015, pages 468-472, Dresden, Germany.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "What we can't measure, we can't understand: Challenges to demographic data procurement in the pursuit of fairness", "authors": [ { "first": "Mckane", "middle": [], "last": "Andrus", "suffix": "" }, { "first": "Elena", "middle": [], "last": "Spitzer", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Brown", "suffix": "" }, { "first": "Alice", "middle": [], "last": "Xiang", "suffix": "" } ], "year": 2021, "venue": "Proc. ACM Conference on Fairness, Accountability, and Transparency", "volume": "", "issue": "", "pages": "249--260", "other_ids": { "DOI": [ "10.1145/3442188.3445888" ] }, "num": null, "urls": [], "raw_text": "McKane Andrus, Elena Spitzer, Jeffrey Brown, and Alice Xiang. 2021. What we can't measure, we can't under- stand: Challenges to demographic data procurement in the pursuit of fairness. In Proc. ACM Conference on Fairness, Accountability, and Transparency, page 249-260.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Findings of the IWSLT 2020 evaluation campaign", "authors": [ { "first": "Ebrahim", "middle": [], "last": "Ansari", "suffix": "" } ], "year": 2020, "venue": "Proc. International Conference on Spoken Language Translation", "volume": "", "issue": "", "pages": "1--34", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ebrahim Ansari et al. 2020. Findings of the IWSLT 2020 evaluation campaign. In Proc. 
International Confer- ence on Spoken Language Translation, pages 1-34.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Common voice: A massively-multilingual speech corpus", "authors": [ { "first": "Rosana", "middle": [], "last": "Ardila", "suffix": "" }, { "first": "Megan", "middle": [], "last": "Branson", "suffix": "" }, { "first": "Kelly", "middle": [], "last": "Davis", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Kohler", "suffix": "" }, { "first": "Josh", "middle": [], "last": "Meyer", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Henretty", "suffix": "" }, { "first": "Reuben", "middle": [], "last": "Morais", "suffix": "" }, { "first": "Lindsay", "middle": [], "last": "Saunders", "suffix": "" }, { "first": "Francis", "middle": [], "last": "Tyers", "suffix": "" }, { "first": "Gregor", "middle": [], "last": "Weber", "suffix": "" } ], "year": 2020, "venue": "Proc. Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "4218--4222", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rosana Ardila, Megan Branson, Kelly Davis, Michael Kohler, Josh Meyer, Michael Henretty, Reuben Morais, Lindsay Saunders, Francis Tyers, and Gregor Weber. 2020. Common voice: A massively-multilingual speech corpus. In Proc. Language Resources and Evaluation Conference, pages 4218-4222, Marseille, France.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "El corpus ESLORA de espa\u00f1ol oral: dise\u00f1o, desarrollo y explotaci\u00f3n", "authors": [ { "first": "Mario", "middle": [], "last": "Barcala", "suffix": "" }, { "first": "Eva", "middle": [], "last": "Dom\u00ednguez", "suffix": "" }, { "first": "Alba", "middle": [], "last": "Fern\u00e1ndez", "suffix": "" }, { "first": "Raquel", "middle": [], "last": "Rivas", "suffix": "" }, { "first": "M", "middle": [ "Paula" ], "last": "Santalla", "suffix": "" }, { "first": "Victoria", "middle": [], "last": "V\u00e1zquez", "suffix": "" }, { "first": "Rebeca", "middle": [], "last": "Villapol", "suffix": "" } ], "year": 2018, "venue": "", "volume": "5", "issue": "", "pages": "217--237", "other_ids": { "DOI": [ "10.15366/chimera2018.5.2.003" ] }, "num": null, "urls": [], "raw_text": "Mario Barcala, Eva Dom\u00ednguez, Alba Fern\u00e1ndez, Raquel Rivas, M. Paula Santalla, Victoria V\u00e1zquez, and Rebeca Villapol. 2018. El corpus ESLORA de espa\u00f1ol oral: dise\u00f1o, desarrollo y explotaci\u00f3n. CHIMERA: Revista de Corpus de Lenguas Romances y Estudios Ling\u00fc\u00edsticos, 5(2):217-237.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Effectively building tera scale maxent language models incorporating non-linguistic signals", "authors": [ { "first": "Fadi", "middle": [], "last": "Biadsy", "suffix": "" }, { "first": "Mohammadreza", "middle": [], "last": "Ghodsi", "suffix": "" }, { "first": "Diamantino", "middle": [], "last": "Caseiro", "suffix": "" } ], "year": 2017, "venue": "Proc. Interspeech", "volume": "", "issue": "", "pages": "2710--2714", "other_ids": { "DOI": [ "10.21437/Interspeech.2017-1203" ] }, "num": null, "urls": [], "raw_text": "Fadi Biadsy, Mohammadreza Ghodsi, and Diamantino Caseiro. 2017. Effectively building tera scale maxent language models incorporating non-linguistic sig- nals. In Proc. 
Interspeech 2017, pages 2710-2714, Stockholm, Sweden.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Representativeness in corpus design", "authors": [ { "first": "Douglas", "middle": [], "last": "Biber", "suffix": "" } ], "year": 1993, "venue": "Literary and Linguistic Computing", "volume": "8", "issue": "4", "pages": "243--257", "other_ids": { "DOI": [ "10.1093/llc/8.4.243" ] }, "num": null, "urls": [], "raw_text": "Douglas Biber. 1993. Representativeness in corpus design. Literary and Linguistic Computing, 8(4):243-257.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "The Future of Dialects: Automatically identifying characteristic features of non-native English accents, chapter 9", "authors": [ { "first": "Jelke", "middle": [], "last": "Bloem", "suffix": "" }, { "first": "Martijn", "middle": [], "last": "Wieling", "suffix": "" }, { "first": "John", "middle": [], "last": "Nerbonne", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jelke Bloem, Martijn Wieling, and John Nerbonne. 2016. The Future of Dialects: Automatically identifying characteristic features of non-native English accents, chapter 9. Language Science Press, Berlin, Germany.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Learning personalized pronunciations for contact name recognition", "authors": [ { "first": "Antoine", "middle": [], "last": "Bruguier", "suffix": "" }, { "first": "Fuchun", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Fran\u00e7oise", "middle": [], "last": "Beaufays", "suffix": "" } ], "year": 2016, "venue": "Proc. Interspeech", "volume": "", "issue": "", "pages": "3096--3100", "other_ids": { "DOI": [ "10.21437/Interspeech.2016-537" ] }, "num": null, "urls": [], "raw_text": "Antoine Bruguier, Fuchun Peng, and Fran\u00e7oise Beaufays. 2016. Learning personalized pronunciations for contact name recognition. In Proc. Interspeech 2016, pages 3096-3100, San Francisco, CA, USA.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Automatic recognition of spontaneous speech for access to multilingual oral history archives", "authors": [ { "first": "William", "middle": [], "last": "Byrne", "suffix": "" }, { "first": "David", "middle": [], "last": "Doermann", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Franz", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Gustman", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Haji\u010d", "suffix": "" }, { "first": "Douglas", "middle": [], "last": "Oard", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Picheny", "suffix": "" }, { "first": "Josef", "middle": [], "last": "Psutka", "suffix": "" }, { "first": "Bhuvana", "middle": [], "last": "Ramabhadran", "suffix": "" }, { "first": "Dagobert", "middle": [], "last": "Soergel", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2004, "venue": "IEEE Transactions on Speech and Audio Processing", "volume": "12", "issue": "4", "pages": "420--435", "other_ids": { "DOI": [ "10.1109/TSA.2004.828702" ] }, "num": null, "urls": [], "raw_text": "William Byrne, David Doermann, Martin Franz, Samuel Gustman, Jan Haji\u010d, Douglas Oard, Michael Picheny, Josef Psutka, Bhuvana Ramabhadran, Dagobert So- ergel, Todd Ward, and Wei-Jing Zhu. 2004. Automatic recognition of spontaneous speech for access to multilingual oral history archives. 
IEEE Transactions on Speech and Audio Processing, 12(4):420-435.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "CALLHOME American English Speech LDC97S42. Web Download. Philadelphia: Linguistic Data Consortium", "authors": [ { "first": "Alexandra", "middle": [], "last": "Canavan", "suffix": "" }, { "first": "David", "middle": [], "last": "Graff", "suffix": "" }, { "first": "George", "middle": [], "last": "Zipperlen", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexandra Canavan, David Graff, and George Zipperlen. 1997. CALLHOME American English Speech LDC97S42. Web Download. Philadelphia: Linguistic Data Consortium.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "The AMI meeting corpus: A pre-announcement", "authors": [ { "first": "Jean", "middle": [], "last": "Carletta", "suffix": "" } ], "year": 2005, "venue": "International Workshop on Machine Learning for Multimodal Interaction", "volume": "", "issue": "", "pages": "28--39", "other_ids": { "DOI": [ "https://link.springer.com/chapter/10.1007/11677482_3" ] }, "num": null, "urls": [], "raw_text": "Jean Carletta et al. 2005. The AMI meeting corpus: A pre-announcement. In International Workshop on Machine Learning for Multimodal Interaction, pages 28-39, Edinburgh, United Kingdom.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Audio adversarial examples: Targeted attacks on speech-to-text", "authors": [ { "first": "Nicholas", "middle": [], "last": "Carlini", "suffix": "" }, { "first": "David", "middle": [], "last": "Wagner", "suffix": "" } ], "year": 2018, "venue": "IEEE Security and Privacy Workshops (SPW)", "volume": "", "issue": "", "pages": "1--7", "other_ids": { "DOI": [ "10.1109/SPW.2018.00009" ] }, "num": null, "urls": [], "raw_text": "Nicholas Carlini and David Wagner. 2018. Audio adver- sarial examples: Targeted attacks on speech-to-text. In IEEE Security and Privacy Workshops (SPW), pages 1-7, San Francisco, CA, USA.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Speech recognition for medical conversations", "authors": [ { "first": "Chung-Cheng", "middle": [], "last": "Chiu", "suffix": "" }, { "first": "Anshuman", "middle": [], "last": "Tripathi", "suffix": "" }, { "first": "Katherine", "middle": [], "last": "Chou", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Co", "suffix": "" }, { "first": "Navdeep", "middle": [], "last": "Jaitly", "suffix": "" }, { "first": "Diana", "middle": [], "last": "Jaunzeikare", "suffix": "" }, { "first": "Anjuli", "middle": [], "last": "Kannan", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": "Hasim", "middle": [], "last": "Sak", "suffix": "" }, { "first": "Ananth", "middle": [], "last": "Sankar", "suffix": "" }, { "first": "Justin", "middle": [], "last": "Tansuwan", "suffix": "" }, { "first": "Nathan", "middle": [], "last": "Wan", "suffix": "" }, { "first": "Yonghui", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Xuedong", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2018, "venue": "Proc", "volume": "", "issue": "", "pages": "2972--2976", "other_ids": { "DOI": [ "10.21437/Interspeech.2018-40" ] }, "num": null, "urls": [], "raw_text": "Chung-Cheng Chiu, Anshuman Tripathi, Katherine Chou, Chris Co, Navdeep Jaitly, Diana Jaunzeikare, Anjuli Kannan, Patrick Nguyen, Hasim Sak, Ananth Sankar, Justin Tansuwan, Nathan Wan, Yonghui Wu, and Xuedong Zhang. 2018. 
Speech recognition for medical conversations. In Proc. Interspeech 2018, pages 2972-2976, Hyderabad, India.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "The Fisher corpus: a resource for the next generations of speech-to-text", "authors": [ { "first": "Christopher", "middle": [], "last": "Cieri", "suffix": "" }, { "first": "David", "middle": [], "last": "Miller", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Walker", "suffix": "" } ], "year": 2004, "venue": "Proc. International Conference on Language Resources and Evaluation", "volume": "", "issue": "", "pages": "69--71", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher Cieri, David Miller, and Kevin Walker. 2004. The Fisher corpus: a resource for the next generations of speech-to-text. In Proc. International Conference on Language Resources and Evaluation, pages 69-71, Lisbon, Portugal.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "000 podcasts: A spoken English document corpus", "authors": [ { "first": "Ann", "middle": [], "last": "Clifton", "suffix": "" }, { "first": "Sravana", "middle": [], "last": "Reddy", "suffix": "" }, { "first": "Yongze", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Aasish", "middle": [], "last": "Pappu", "suffix": "" }, { "first": "Rezvaneh", "middle": [], "last": "Rezapour", "suffix": "" }, { "first": "Hamed", "middle": [], "last": "Bonab", "suffix": "" }, { "first": "Maria", "middle": [], "last": "Eskevich", "suffix": "" }, { "first": "Gareth", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Jussi", "middle": [], "last": "Karlgren", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Carterette", "suffix": "" }, { "first": "Rosie", "middle": [], "last": "Jones", "suffix": "" } ], "year": 2020, "venue": "Proc. International Conference on Computational Linguistics", "volume": "100", "issue": "", "pages": "5903--5917", "other_ids": { "DOI": [ "10.18653/v1/2020.coling-main.519" ] }, "num": null, "urls": [], "raw_text": "Ann Clifton, Sravana Reddy, Yongze Yu, Aasish Pappu, Rezvaneh Rezapour, Hamed Bonab, Maria Eskevich, Gareth Jones, Jussi Karlgren, Ben Carterette, and Rosie Jones. 2020. 100,000 podcasts: A spoken English document corpus. In Proc. International Conference on Computational Linguistics, pages 5903-5917, Barcelona, Spain.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Individual variation in language attitudes toward voice-AI: The role of listeners' autistic-like traits", "authors": [ { "first": "Michelle", "middle": [], "last": "Cohn", "suffix": "" }, { "first": "Melina", "middle": [], "last": "Sarian", "suffix": "" }, { "first": "Kristin", "middle": [], "last": "Predeck", "suffix": "" }, { "first": "Georgia", "middle": [], "last": "Zellou", "suffix": "" } ], "year": 2020, "venue": "Proc. Interspeech 2020", "volume": "", "issue": "", "pages": "1813--1817", "other_ids": { "DOI": [ "10.21437/Interspeech.2020-1339" ] }, "num": null, "urls": [], "raw_text": "Michelle Cohn, Melina Sarian, Kristin Predeck, and Georgia Zellou. 2020. Individual variation in lan- guage attitudes toward voice-AI: The role of listeners' autistic-like traits. In Proc. 
Interspeech 2020, pages 1813-1817, Shanghai, China.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Over het corpus gesproken Nederlands", "authors": [ { "first": "Laura", "middle": [], "last": "Van Eerten", "suffix": "" } ], "year": 2007, "venue": "Nederlandse Taalkunde", "volume": "12", "issue": "3", "pages": "194--215", "other_ids": {}, "num": null, "urls": [], "raw_text": "Laura van Eerten. 2007. Over het corpus gesproken Nederlands. Nederlandse Taalkunde, 12(3):194-215.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Sources of variation in the speech of African Americans: Perspectives from sociophonetics", "authors": [ { "first": "Charlie", "middle": [], "last": "Farrington", "suffix": "" }, { "first": "Sharese", "middle": [], "last": "King", "suffix": "" }, { "first": "Mary", "middle": [], "last": "Kohn", "suffix": "" } ], "year": 2020, "venue": "WIREs Cognitive Science", "volume": "12", "issue": "3", "pages": "1--17", "other_ids": { "DOI": [ "10.1002/wcs.1550" ] }, "num": null, "urls": [], "raw_text": "Charlie Farrington, Sharese King, and Mary Kohn. 2020. Sources of variation in the speech of African Americans: Perspectives from sociophonetics. WIREs Cognitive Science, 12(3):1-17.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Bence Mark Halpern, and Odette Scharenborg. 2021. Quantifying bias in automatic speech recognition", "authors": [ { "first": "Siyuan", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Olya", "middle": [], "last": "Kudina", "suffix": "" } ], "year": null, "venue": "Proc. Interspeech 2021 (submitted)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Siyuan Feng, Olya Kudina, Bence Mark Halpern, and Odette Scharenborg. 2021. Quantifying bias in auto- matic speech recognition. In Proc. Interspeech 2021 (submitted), Brno, Czech Republic.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Effects of speaking rate and word frequency on pronunciations in conversational speech", "authors": [ { "first": "Eric", "middle": [], "last": "Fosler", "suffix": "" }, { "first": "-", "middle": [], "last": "Lussier", "suffix": "" }, { "first": "Nelson", "middle": [], "last": "Morgan", "suffix": "" } ], "year": 1999, "venue": "Speech Communication", "volume": "29", "issue": "2", "pages": "137--158", "other_ids": { "DOI": [ "10.1016/S0167-6393(99)00035-7" ] }, "num": null, "urls": [], "raw_text": "Eric Fosler-Lussier and Nelson Morgan. 1999. Effects of speaking rate and word frequency on pronunciations in conversational speech. Speech Communication, 29(2):137-158.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "TIMIT acousticphonetic continuous speech corpus LDC93S1. Web Download. Philadelphia: Linguistic Data Consortium", "authors": [ { "first": "John", "middle": [ "S" ], "last": "Garofolo", "suffix": "" }, { "first": "Lori", "middle": [ "F" ], "last": "Lamel", "suffix": "" }, { "first": "William", "middle": [ "M" ], "last": "Fisher", "suffix": "" }, { "first": "Jonathan", "middle": [ "G" ], "last": "Fiscus", "suffix": "" }, { "first": "David", "middle": [ "S" ], "last": "Pallett", "suffix": "" }, { "first": "Nancy", "middle": [ "L" ], "last": "Dahlgren", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Zue", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John S. Garofolo, Lori F. Lamel, William M. Fisher, Jonathan G. Fiscus, David S. Pallett, Nancy L. 
Dahlgren, and Victor Zue. 1993. TIMIT acoustic- phonetic continuous speech corpus LDC93S1. Web Download. Philadelphia: Linguistic Data Consortium.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Audio Set: An ontology and human-labeled dataset for audio events", "authors": [ { "first": "F", "middle": [], "last": "Jort", "suffix": "" }, { "first": "", "middle": [], "last": "Gemmeke", "suffix": "" }, { "first": "P", "middle": [ "W" ], "last": "Daniel", "suffix": "" }, { "first": "Dylan", "middle": [], "last": "Ellis", "suffix": "" }, { "first": "Aren", "middle": [], "last": "Freedman", "suffix": "" }, { "first": "Wade", "middle": [], "last": "Jansen", "suffix": "" }, { "first": "R", "middle": [], "last": "Lawrence", "suffix": "" }, { "first": "Manoj", "middle": [], "last": "Moore", "suffix": "" }, { "first": "Marvin", "middle": [], "last": "Plakal", "suffix": "" }, { "first": "", "middle": [], "last": "Ritter", "suffix": "" } ], "year": 2017, "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", "volume": "", "issue": "", "pages": "776--780", "other_ids": { "DOI": [ "10.1109/ICASSP.2017.7952261" ] }, "num": null, "urls": [], "raw_text": "Jort F. Gemmeke, Daniel P. W. Ellis, Dylan Freedman, Aren Jansen, Wade Lawrence, R. Channing Moore, Manoj Plakal, and Marvin Ritter. 2017. Audio Set: An ontology and human-labeled dataset for audio events. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 776-780, New Orleans, LA, USA.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "SWITCHBOARD: Telephone speech corpus for research and development", "authors": [ { "first": "John", "middle": [ "J" ], "last": "Godfrey", "suffix": "" }, { "first": "Edward", "middle": [ "C" ], "last": "Holliman", "suffix": "" }, { "first": "Jane", "middle": [], "last": "Mcdaniel", "suffix": "" } ], "year": 1992, "venue": "IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP)", "volume": "", "issue": "", "pages": "517--520", "other_ids": { "DOI": [ "10.1109/ICASSP.1992.225858" ] }, "num": null, "urls": [], "raw_text": "John J. Godfrey, Edward C. Holliman, and Jane McDaniel. 1992. SWITCHBOARD: Telephone speech corpus for research and development. In IEEE International Con- ference on Acoustics, Speech, and Signal Processing (ICASSP), pages 517-520, San Francisco, CA, USA.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Cross domain automatic transcription on the TC-STAR EPPS corpus", "authors": [ { "first": "Christian", "middle": [], "last": "Gollan", "suffix": "" }, { "first": "Maximilian", "middle": [], "last": "Bisani", "suffix": "" }, { "first": "Stephan", "middle": [], "last": "Kanthak", "suffix": "" }, { "first": "Ralf", "middle": [], "last": "Schl\u00fcter", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2005, "venue": "IEEE International Conference on Acoustics, Speech, and Signal Processing", "volume": "", "issue": "", "pages": "825--828", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christian Gollan, Maximilian Bisani, Stephan Kanthak, Ralf Schl\u00fcter, and Hermann Ney. 2005. Cross domain automatic transcription on the TC-STAR EPPS corpus. 
In IEEE International Conference on Acoustics, Speech, and Signal Processing, pages 825-828, Philadelphia, PA, USA.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Initiation, progression, and conditioning of the short-front vowel shift in Australia", "authors": [ { "first": "James", "middle": [], "last": "Grama", "suffix": "" }, { "first": "Catherine", "middle": [ "E" ], "last": "Travis", "suffix": "" }, { "first": "Simon", "middle": [], "last": "Gonzalez", "suffix": "" } ], "year": 2019, "venue": "Proc. International Congress of Phonetic Sciences (ICPhS)", "volume": "", "issue": "", "pages": "1769--1773", "other_ids": {}, "num": null, "urls": [], "raw_text": "James Grama, Catherine E. Travis, and Simon Gonzalez. 2019. Initiation, progression, and conditioning of the short-front vowel shift in Australia. In Proc. International Congress of Phonetic Sciences (ICPhS), pages 1769-1773, Melbourne, Australia.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Speech databases of typical children and children with SLI", "authors": [ { "first": "Pavel", "middle": [], "last": "Grill", "suffix": "" }, { "first": "Jana", "middle": [], "last": "Tu\u010dkov\u00e1", "suffix": "" } ], "year": 2016, "venue": "PLOS ONE", "volume": "11", "issue": "3", "pages": "1--21", "other_ids": { "DOI": [ "10.1371/journal.pone.0150365" ] }, "num": null, "urls": [], "raw_text": "Pavel Grill and Jana Tu\u010dkov\u00e1. 2016. Speech databases of typical children and children with SLI. PLOS ONE, 11(3):1-21.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "What size test set gives good error rate estimates?", "authors": [ { "first": "Isabelle", "middle": [], "last": "Guyon", "suffix": "" }, { "first": "John", "middle": [], "last": "Makhoul", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "Vladimir", "middle": [], "last": "Vapnik", "suffix": "" } ], "year": 1998, "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "volume": "20", "issue": "1", "pages": "52--64", "other_ids": { "DOI": [ "10.1109/34.655649" ] }, "num": null, "urls": [], "raw_text": "Isabelle Guyon, John Makhoul, Richard Schwartz, and Vladimir Vapnik. 1998. What size test set gives good error rate estimates? IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(1):52-64.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Jacqueline Pan, Albert Gordo, and Cristian Canton Ferrer. 2021. Towards measuring fairness in AI: the Casual Conversations dataset", "authors": [ { "first": "Caner", "middle": [], "last": "Hazirbas", "suffix": "" }, { "first": "Joanna", "middle": [], "last": "Bitton", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Dolhansky", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Caner Hazirbas, Joanna Bitton, Brian Dolhansky, Jacque- line Pan, Albert Gordo, and Cristian Canton Ferrer. 2021. Towards measuring fairness in AI: the Casual Conversations dataset.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Speech recognition of different sampling rates using fractal code descriptor", "authors": [ { "first": "Rattaphon", "middle": [], "last": "Hokking", "suffix": "" }, { "first": "Kuntpong", "middle": [], "last": "Woraratpanya", "suffix": "" }, { "first": "Yoshimitsu", "middle": [], "last": "Kuroki", "suffix": "" } ], "year": 2016, "venue": "Proc. 
International Joint Conference on Computer Science and Software Engineering (JCSSE)", "volume": "", "issue": "", "pages": "1--5", "other_ids": { "DOI": [ "10.1109/JCSSE.2016.7748895" ] }, "num": null, "urls": [], "raw_text": "Rattaphon Hokking, Kuntpong Woraratpanya, and Yoshimitsu Kuroki. 2016. Speech recognition of different sampling rates using fractal code descriptor. In Proc. International Joint Conference on Computer Science and Software Engineering (JCSSE), pages 1-5, Khon Kaen, Thailand.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Yann Dauphin, and Andrea Frome. 2020a. What do compressed deep neural networks forget?", "authors": [ { "first": "Sara", "middle": [], "last": "Hooker", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Courville", "suffix": "" }, { "first": "Gregory", "middle": [], "last": "Clark", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sara Hooker, Aaron Courville, Gregory Clark, Yann Dauphin, and Andrea Frome. 2020a. What do compressed deep neural networks forget?", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Characterising bias in compressed models", "authors": [ { "first": "Sara", "middle": [], "last": "Hooker", "suffix": "" }, { "first": "Nyalleng", "middle": [], "last": "Moorosi", "suffix": "" }, { "first": "Gregory", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Samy", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Emily", "middle": [], "last": "Denton", "suffix": "" } ], "year": 2020, "venue": "Proc. ICML Workshop on Human Interpretability in Machine Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sara Hooker, Nyalleng Moorosi, Gregory Clark, Samy Bengio, and Emily Denton. 2020b. Characterising bias in compressed models. In Proc. ICML Workshop on Human Interpretability in Machine Learning.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Structure of pauses in speech in the context of speaker verification and classification of speech type", "authors": [ { "first": "Magdalena", "middle": [], "last": "Igras-Cybulska", "suffix": "" }, { "first": "Bartosz", "middle": [], "last": "Zi\u00f3\u0142ko", "suffix": "" }, { "first": "Piotr\u017celasko", "middle": [], "last": "", "suffix": "" }, { "first": "Marcin", "middle": [], "last": "Witkowski", "suffix": "" } ], "year": 2016, "venue": "Journal on Audio, Speech, and Music Processing", "volume": "", "issue": "18", "pages": "1--16", "other_ids": { "DOI": [ "10.1186/s13636-016-0096-7" ] }, "num": null, "urls": [], "raw_text": "Magdalena Igras-Cybulska, Bartosz Zi\u00f3\u0142ko, Piotr\u017belasko, and Marcin Witkowski. Structure of pauses in speech in the context of speaker verification and classification of speech type. Journal on Audio, Speech, and Music Processing, 2016(18):1-16.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Comparative analysis and review of interactive voice response systems", "authors": [ { "first": "A", "middle": [], "last": "Itorobong", "suffix": "" }, { "first": "Ambrose", "middle": [ "A" ], "last": "Inam", "suffix": "" }, { "first": "Olawande", "middle": [], "last": "Azeta", "suffix": "" }, { "first": "", "middle": [], "last": "Daramola", "suffix": "" } ], "year": 2017, "venue": "Proc. 
Conference on Information Communication Technology and Society (ICTAS)", "volume": "", "issue": "", "pages": "1--6", "other_ids": { "DOI": [ "10.1109/ICTAS.2017.7920660" ] }, "num": null, "urls": [], "raw_text": "Itorobong A. Inam, Ambrose A. Azeta, and Olawande Daramola. 2017. Comparative analysis and review of interactive voice response systems. In Proc. Confer- ence on Information Communication Technology and Society (ICTAS), pages 1-6, Durban, South Africa.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "End-to-end speech recognition and disfluency removal", "authors": [ { "first": "Jamshid", "middle": [], "last": "Paria", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Lou", "suffix": "" }, { "first": "", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2020, "venue": "Findings of the Association for Computational Linguistics: EMNLP 2020", "volume": "", "issue": "", "pages": "2051--2061", "other_ids": { "DOI": [ "10.18653/v1/2020.findings-emnlp.186" ] }, "num": null, "urls": [], "raw_text": "Paria Jamshid Lou and Mark Johnson. 2020. End-to-end speech recognition and disfluency removal. In Find- ings of the Association for Computational Linguistics: EMNLP 2020, pages 2051-2061.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Elizabeth Shriberg, Andreas Stolcke, and Chuck Wooters. 2004. ICSI Meeting Speech LDC2004S02. Web Download. Philadelphia: Linguistic Data Consortium", "authors": [ { "first": "Adam", "middle": [], "last": "Janin", "suffix": "" }, { "first": "Jane", "middle": [], "last": "Edwards", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Ellis", "suffix": "" }, { "first": "David", "middle": [], "last": "Gelbart", "suffix": "" }, { "first": "Nelson", "middle": [], "last": "Morgan", "suffix": "" }, { "first": "Barbara", "middle": [], "last": "Peskin", "suffix": "" }, { "first": "Thilo", "middle": [], "last": "Pfau", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Janin, Jane Edwards, Dan Ellis, David Gelbart, Nelson Morgan, Barbara Peskin, Thilo Pfau, Elizabeth Shriberg, Andreas Stolcke, and Chuck Wooters. 2004. ICSI Meeting Speech LDC2004S02. Web Download. Philadelphia: Linguistic Data Consortium.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Large-scale pre-training of end-to-end multitalker ASR for meeting transcription with single distant microphone", "authors": [ { "first": "Naoyuki", "middle": [], "last": "Kanda", "suffix": "" }, { "first": "Guoli", "middle": [], "last": "Ye", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Yashesh", "middle": [], "last": "Gaur", "suffix": "" }, { "first": "Xiaofei", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Zhong", "middle": [], "last": "Meng", "suffix": "" }, { "first": "Zhuo", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Takuya", "middle": [], "last": "Yoshioka", "suffix": "" } ], "year": 2021, "venue": "Proc. Interspeech 2021 (submitted)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Naoyuki Kanda, Guoli Ye, Yu Wu, Yashesh Gaur, Xiaofei Wang, Zhong Meng, Zhuo Chen, and Takuya Yoshioka. 2021. Large-scale pre-training of end-to-end multi- talker ASR for meeting transcription with single distant microphone. In Proc. 
Interspeech 2021 (submitted), Brno, Czech Republic.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "The corpus of regional African American language. Version 2020.05. Eugene, OR: The Online Resources for African American Language Project", "authors": [ { "first": "Tyler", "middle": [], "last": "Kendall", "suffix": "" }, { "first": "Charlie", "middle": [], "last": "Farrington", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tyler Kendall and Charlie Farrington. 2020. The corpus of regional African American language. Version 2020.05. Eugene, OR: The Online Resources for African American Language Project.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Attentional speech recognition models misbehave on out-of-domain utterances", "authors": [ { "first": "Phillip", "middle": [], "last": "Keung", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Niu", "suffix": "" }, { "first": "Yichao", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Julian", "middle": [], "last": "Salazar", "suffix": "" }, { "first": "Vikas", "middle": [], "last": "Bhardwaj", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Phillip Keung, Wei Niu, Yichao Lu, Julian Salazar, and Vikas Bhardwaj. 2020. Attentional speech recognition models misbehave on out-of-domain utterances.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Improving noise robust automatic speech recognition with single-channel timedomain enhancement network", "authors": [ { "first": "Keisuke", "middle": [], "last": "Kinoshita", "suffix": "" }, { "first": "Tsubasa", "middle": [], "last": "Ochiai", "suffix": "" }, { "first": "Marc", "middle": [], "last": "Delcroix", "suffix": "" }, { "first": "Tomohiro", "middle": [], "last": "Nakatani", "suffix": "" } ], "year": 2020, "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", "volume": "", "issue": "", "pages": "7009--7013", "other_ids": { "DOI": [ "10.1109/ICASSP40776.2020.9053266" ] }, "num": null, "urls": [], "raw_text": "Keisuke Kinoshita, Tsubasa Ochiai, Marc Delcroix, and Tomohiro Nakatani. 2020. Improving noise robust automatic speech recognition with single-channel time- domain enhancement network. In IEEE International Conference on Acoustics, Speech and Signal Process- ing (ICASSP), pages 7009-7013, Barcelona, Spain.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Forgotten little words: How backchannels and particles may facilitate speech planning in conversation?", "authors": [ { "first": "Birgit", "middle": [], "last": "Knudsen", "suffix": "" }, { "first": "Ava", "middle": [], "last": "Creemers", "suffix": "" }, { "first": "Antje", "middle": [ "S" ], "last": "Meyer", "suffix": "" } ], "year": 2020, "venue": "Frontiers in Psychology", "volume": "11", "issue": "", "pages": "1--10", "other_ids": { "DOI": [ "10.3389/fpsyg.2020.593671" ] }, "num": null, "urls": [], "raw_text": "Birgit Knudsen, Ava Creemers, and Antje S. Meyer. 2020. Forgotten little words: How backchannels and parti- cles may facilitate speech planning in conversation? 
Frontiers in Psychology, 11:1-10.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Racial disparities in automated speech recognition", "authors": [ { "first": "Allison", "middle": [], "last": "Koenecke", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Nam", "suffix": "" }, { "first": "Emily", "middle": [], "last": "Lake", "suffix": "" }, { "first": "Joe", "middle": [], "last": "Nudell", "suffix": "" }, { "first": "Minnie", "middle": [], "last": "Quartey", "suffix": "" }, { "first": "Zion", "middle": [], "last": "Mengesha", "suffix": "" }, { "first": "Connor", "middle": [], "last": "Toups", "suffix": "" }, { "first": "John", "middle": [ "R" ], "last": "Rickford", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" }, { "first": "Sharad", "middle": [], "last": "Goel", "suffix": "" } ], "year": 2020, "venue": "Proc. of the National Academy of Sciences", "volume": "117", "issue": "", "pages": "7684--7689", "other_ids": { "DOI": [ "10.1073/pnas.1915768117" ] }, "num": null, "urls": [], "raw_text": "Allison Koenecke, Andrew Nam, Emily Lake, Joe Nudell, Minnie Quartey, Zion Mengesha, Connor Toups, John R. Rickford, Dan Jurafsky, and Sharad Goel. 2020. Racial disparities in automated speech recog- nition. Proc. of the National Academy of Sciences, 117(14):7684-7689.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "The intersection of sex and social class in the course of linguistic change", "authors": [ { "first": "William", "middle": [], "last": "Labov", "suffix": "" } ], "year": 1990, "venue": "Language Variation and Change", "volume": "2", "issue": "", "pages": "205--254", "other_ids": {}, "num": null, "urls": [], "raw_text": "William Labov. 1990. The intersection of sex and social class in the course of linguistic change. Language Variation and Change, 2:205-254.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "The three dialects of English", "authors": [ { "first": "William", "middle": [], "last": "Labov", "suffix": "" } ], "year": 1991, "venue": "New Ways of Analyzing Sound Change", "volume": "", "issue": "", "pages": "1--44", "other_ids": {}, "num": null, "urls": [], "raw_text": "William Labov. 1991. The three dialects of English. In New Ways of Analyzing Sound Change, pages 1-44.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Binary codes capable of correcting deletions, insertions and reversals", "authors": [ { "first": "Vladimir", "middle": [], "last": "Levenshtein", "suffix": "" } ], "year": 1966, "venue": "Soviet Physics Doklady", "volume": "10", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vladimir Levenshtein. 1966. Binary codes capable of correcting deletions, insertions and reversals. 
Soviet Physics Doklady, 10:707.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Towards fast and accurate streaming end-to-end ASR", "authors": [ { "first": "Bo", "middle": [], "last": "Li", "suffix": "" }, { "first": "", "middle": [], "last": "Shuo-Yiin", "suffix": "" }, { "first": "Tara", "middle": [ "N" ], "last": "Chang", "suffix": "" }, { "first": "Ruoming", "middle": [], "last": "Sainath", "suffix": "" }, { "first": "Yanzhang", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "He", "suffix": "" }, { "first": "Yonghui", "middle": [], "last": "Strohman", "suffix": "" }, { "first": "", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2020, "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", "volume": "", "issue": "", "pages": "6069--6073", "other_ids": { "DOI": [ "10.1109/ICASSP40776.2020.9054715" ] }, "num": null, "urls": [], "raw_text": "Bo Li, Shuo-yiin Chang, Tara N. Sainath, Ruoming Pang, Yanzhang He, Trevor Strohman, and Yonghui Wu. 2020. Towards fast and accurate streaming end-to-end ASR. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6069-6073, Barcelona, Spain.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Decision tree state clustering with word and syllable features", "authors": [ { "first": "Hank", "middle": [], "last": "Liao", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Alberti", "suffix": "" }, { "first": "Michiel", "middle": [], "last": "Bacchiani", "suffix": "" }, { "first": "Olivier", "middle": [], "last": "Siohan", "suffix": "" } ], "year": 2010, "venue": "Proc. Interspeech", "volume": "", "issue": "", "pages": "2958--2961", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hank Liao, Chris Alberti, Michiel Bacchiani, and Olivier Siohan. 2010. Decision tree state clustering with word and syllable features. In Proc. Interspeech 2010, page 2958 -2961, Makuhari, Chiba, Japan.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Large vocabulary automatic speech recognition for children", "authors": [ { "first": "Hank", "middle": [], "last": "Liao", "suffix": "" }, { "first": "Golan", "middle": [], "last": "Pundak", "suffix": "" }, { "first": "Olivier", "middle": [], "last": "Siohan", "suffix": "" }, { "first": "Melissa", "middle": [], "last": "Carroll", "suffix": "" }, { "first": "Noah", "middle": [], "last": "Coccaro", "suffix": "" }, { "first": "Qi-Ming", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Tara", "middle": [ "N" ], "last": "Sainath", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Senior", "suffix": "" } ], "year": 2015, "venue": "Fran\u00e7oise Beaufays, and Michiel Bacchiani", "volume": "", "issue": "", "pages": "1611--1615", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hank Liao, Golan Pundak, Olivier Siohan, Melissa Car- roll, Noah Coccaro, Qi-Ming Jiang, Tara N. Sainath, Andrew Senior, Fran\u00e7oise Beaufays, and Michiel Bacchiani. 2015. Large vocabulary automatic speech recognition for children. In Proc. 
Interspeech 2015, pages 1611-1615, Dresden, Germany.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Rethinking evaluation in ASR: Are our models robust enough?", "authors": [ { "first": "Tatiana", "middle": [], "last": "Likhomanenko", "suffix": "" }, { "first": "Qiantong", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Vineel", "middle": [], "last": "Pratap", "suffix": "" }, { "first": "Paden", "middle": [], "last": "Tomasello", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Kahn", "suffix": "" }, { "first": "Gilad", "middle": [], "last": "Avidov", "suffix": "" }, { "first": "Ronan", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "Gabriel", "middle": [], "last": "Synnaeve", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tatiana Likhomanenko, Qiantong Xu, Vineel Pratap, Paden Tomasello, Jacob Kahn, Gilad Avidov, Ronan Collobert, and Gabriel Synnaeve. 2020. Rethinking evaluation in ASR: Are our models robust enough?", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "Real-time systems", "authors": [ { "first": "W", "middle": [ "S" ], "last": "Jane", "suffix": "" }, { "first": "", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jane W. S. Liu. 2000. Real-time systems. Prentice Hall, Upper Saddle River, NJ.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "Towards understanding ASR error correction for medical conversations", "authors": [ { "first": "Anirudh", "middle": [], "last": "Mani", "suffix": "" }, { "first": "Shruti", "middle": [], "last": "Palaskar", "suffix": "" }, { "first": "Sandeep", "middle": [], "last": "Konam", "suffix": "" } ], "year": 2020, "venue": "Proc. ACL 2020 Workshop on Natural Language Processing for Medical Conversations", "volume": "", "issue": "", "pages": "7--11", "other_ids": { "DOI": [ "10.18653/v1/2020.nlpmc-1.2" ] }, "num": null, "urls": [], "raw_text": "Anirudh Mani, Shruti Palaskar, and Sandeep Konam. 2020. Towards understanding ASR error correction for medical conversations. In Proc. ACL 2020 Work- shop on Natural Language Processing for Medical Conversations, pages 7-11.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "MLPerf training benchmark", "authors": [ { "first": "Peter", "middle": [], "last": "Mattson", "suffix": "" } ], "year": 2020, "venue": "Proc. Conference on Machine Learning and Systems", "volume": "", "issue": "", "pages": "1--14", "other_ids": { "DOI": [ "10.5281/zenodo.3610717" ] }, "num": null, "urls": [], "raw_text": "Peter Mattson et al. 2020. MLPerf training benchmark. In Proc. Conference on Machine Learning and Systems, pages 1-14, Austin, TX, USA.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "ASR-based dictation practice for second language pronunciation improvement", "authors": [ { "first": "Shannon", "middle": [], "last": "Mccrocklin", "suffix": "" } ], "year": 2019, "venue": "Journal of Second Language Pronunciation", "volume": "5", "issue": "1", "pages": "98--118", "other_ids": { "DOI": [ "10.1075/jslp.16034.mcc" ] }, "num": null, "urls": [], "raw_text": "Shannon McCrocklin. 2019. ASR-based dictation practice for second language pronunciation improve- ment. 
Journal of Second Language Pronunciation, 5(1):98-118.", "links": null }, "BIBREF54": { "ref_id": "b54", "title": "Improved robustness to disfluencies in RNN-Transducer based speech recognition", "authors": [ { "first": "Valentin", "middle": [], "last": "Mendelev", "suffix": "" }, { "first": "Tina", "middle": [], "last": "Raissi", "suffix": "" }, { "first": "Guglielmo", "middle": [], "last": "Camporese", "suffix": "" }, { "first": "Manuel", "middle": [], "last": "Giollo", "suffix": "" } ], "year": 2021, "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Valentin Mendelev, Tina Raissi, Guglielmo Camporese, and Manuel Giollo. 2021. Improved robustness to dis- fluencies in RNN-Transducer based speech recognition. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Toronto, Canada.", "links": null }, "BIBREF55": { "ref_id": "b55", "title": "Assessing the accuracy of automatic speech recognition for psychotherapy", "authors": [ { "first": "Adam", "middle": [ "S" ], "last": "Miner", "suffix": "" }, { "first": "Albert", "middle": [], "last": "Haque", "suffix": "" }, { "first": "Jason", "middle": [ "Alan" ], "last": "Fries", "suffix": "" }, { "first": "Scott", "middle": [ "L" ], "last": "Fleming", "suffix": "" }, { "first": "Denise", "middle": [ "E" ], "last": "Wilfley", "suffix": "" }, { "first": "G", "middle": [ "Terence" ], "last": "Wilson", "suffix": "" }, { "first": "Arnold", "middle": [], "last": "Milstein", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" }, { "first": "Bruce", "middle": [ "A" ], "last": "Arnow", "suffix": "" }, { "first": "W", "middle": [ "Stewart" ], "last": "Agras", "suffix": "" }, { "first": "Li", "middle": [], "last": "Fei-Fei", "suffix": "" }, { "first": "Nigam", "middle": [ "H" ], "last": "Shah", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1038/s41746-020-0285-8" ] }, "num": null, "urls": [], "raw_text": "Adam S. Miner, Albert Haque, Jason Alan Fries, Scott L. Fleming, Denise E. Wilfley, G. Terence Wilson, Arnold Milstein, Dan Jurafsky, Bruce A. Arnow, W. Stewart Agras, Li Fei-Fei, and Nigam H. Shah. 2020. Assess- ing the accuracy of automatic speech recognition for psychotherapy. npj Digital Medicine, 3(82).", "links": null }, "BIBREF56": { "ref_id": "b56", "title": "Timnit Gebru, and Jamie Morgenstern. 2020. Diversity and inclusion metrics in subset selection", "authors": [ { "first": "Margaret", "middle": [], "last": "Mitchell", "suffix": "" }, { "first": "Dylan", "middle": [], "last": "Baker", "suffix": "" }, { "first": "Nyalleng", "middle": [], "last": "Moorosi", "suffix": "" }, { "first": "Emily", "middle": [], "last": "Denton", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Hutchinson", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Hanna", "suffix": "" } ], "year": null, "venue": "Proc", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1145/3375627.3375832" ] }, "num": null, "urls": [], "raw_text": "Margaret Mitchell, Dylan Baker, Nyalleng Moorosi, Emily Denton, Ben Hutchinson, Alex Hanna, Timnit Gebru, and Jamie Morgenstern. 2020. Diversity and inclusion metrics in subset selection. 
In Proc.", "links": null }, "BIBREF57": { "ref_id": "b57", "title": "AAAI/ACM Conference on AI, Ethics, and Society", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "117--123", "other_ids": {}, "num": null, "urls": [], "raw_text": "AAAI/ACM Conference on AI, Ethics, and Society, page 117-123, New York, NY, USA.", "links": null }, "BIBREF58": { "ref_id": "b58", "title": "Model cards for model reporting", "authors": [ { "first": "Margaret", "middle": [], "last": "Mitchell", "suffix": "" }, { "first": "Simone", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Zaldivar", "suffix": "" }, { "first": "Parker", "middle": [], "last": "Barnes", "suffix": "" }, { "first": "Lucy", "middle": [], "last": "Vasserman", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Hutchinson", "suffix": "" }, { "first": "Elena", "middle": [], "last": "Spitzer", "suffix": "" }, { "first": "Deborah", "middle": [], "last": "Inioluwa", "suffix": "" }, { "first": "Timnit", "middle": [], "last": "Raji", "suffix": "" }, { "first": "", "middle": [], "last": "Gebru", "suffix": "" } ], "year": 2019, "venue": "Proc. Conference on Fairness, Accountability, and Transparency", "volume": "", "issue": "", "pages": "220--229", "other_ids": { "DOI": [ "10.1145/3287560.3287596" ] }, "num": null, "urls": [], "raw_text": "Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. 2019. Model cards for model reporting. In Proc. Con- ference on Fairness, Accountability, and Transparency, page 220-229, Atlanta, GA, USA.", "links": null }, "BIBREF59": { "ref_id": "b59", "title": "SPGISpeech: 5,000 hours of transcribed financial audio for fully formatted end-to-end speech recognition", "authors": [ { "first": "K", "middle": [], "last": "Patrick", "suffix": "" }, { "first": "Vitaly", "middle": [], "last": "O'neill", "suffix": "" }, { "first": "Somshubra", "middle": [], "last": "Lavrukhin", "suffix": "" }, { "first": "Vahid", "middle": [], "last": "Majumdar", "suffix": "" }, { "first": "Yuekai", "middle": [], "last": "Noroozi", "suffix": "" }, { "first": "Oleksii", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jagadeesh", "middle": [], "last": "Kuchaiev", "suffix": "" }, { "first": "Yuliya", "middle": [], "last": "Balam", "suffix": "" }, { "first": "Keenan", "middle": [], "last": "Dovzhenko", "suffix": "" }, { "first": "Michael", "middle": [ "D" ], "last": "Freyberg", "suffix": "" }, { "first": "Boris", "middle": [], "last": "Shulman", "suffix": "" }, { "first": "Shinji", "middle": [], "last": "Ginsburg", "suffix": "" }, { "first": "Georg", "middle": [], "last": "Watanabe", "suffix": "" }, { "first": "", "middle": [], "last": "Kucsko", "suffix": "" } ], "year": 2021, "venue": "Proc. Interspeech 2021 (submitted)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Patrick K. O'Neill, Vitaly Lavrukhin, Somshubra Majum- dar, Vahid Noroozi, Yuekai Zhang, Oleksii Kuchaiev, Jagadeesh Balam, Yuliya Dovzhenko, Keenan Frey- berg, Michael D. Shulman, Boris Ginsburg, Shinji Watanabe, and Georg Kucsko. 2021. SPGISpeech: 5,000 hours of transcribed financial audio for fully formatted end-to-end speech recognition. In Proc. 
Interspeech 2021 (submitted), Brno, Czech Republic.", "links": null }, "BIBREF60": { "ref_id": "b60", "title": "Automatic speech recognition performance on a voicemail transcription task", "authors": [ { "first": "Mukund", "middle": [], "last": "Padmanabhan", "suffix": "" }, { "first": "George", "middle": [], "last": "Saon", "suffix": "" }, { "first": "Jing", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Kingsbury", "suffix": "" }, { "first": "Lidia", "middle": [], "last": "Mangu", "suffix": "" } ], "year": 2002, "venue": "IEEE Transactions on Speech and Audio Processing", "volume": "10", "issue": "7", "pages": "433--442", "other_ids": { "DOI": [ "10.1109/TSA.2002.804303" ] }, "num": null, "urls": [], "raw_text": "Mukund Padmanabhan, George Saon, Jing Huang, Brian Kingsbury, and Lidia Mangu. 2002. Automatic speech recognition performance on a voicemail transcription task. IEEE Transactions on Speech and Audio Processing, 10(7):433-442.", "links": null }, "BIBREF61": { "ref_id": "b61", "title": "Librispeech: An ASR corpus based on public domain audio books", "authors": [ { "first": "Vassil", "middle": [], "last": "Panayotov", "suffix": "" }, { "first": "Guoguo", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Povey", "suffix": "" }, { "first": "Sanjeev", "middle": [], "last": "Khudanpur", "suffix": "" } ], "year": 2015, "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", "volume": "", "issue": "", "pages": "5206--5210", "other_ids": { "DOI": [ "10.1109/ICASSP.2015.7178964" ] }, "num": null, "urls": [], "raw_text": "Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. 2015. Librispeech: An ASR corpus based on public domain audio books. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5206-5210, South Brisbane, Australia.", "links": null }, "BIBREF62": { "ref_id": "b62", "title": "Aurora Working Group: DSR Front End LVCSR Evaluation AU/384/02", "authors": [ { "first": "Naveen", "middle": [], "last": "Parihar", "suffix": "" }, { "first": "Joseph", "middle": [], "last": "Picone", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Naveen Parihar and Joseph Picone. 2002. Aurora Working Group: DSR Front End LVCSR Evaluation AU/384/02. Technical report, Mississippi State University.", "links": null }, "BIBREF63": { "ref_id": "b63", "title": "SpecAugment: A simple data augmentation method for automatic speech recognition", "authors": [ { "first": "Daniel", "middle": [ "S" ], "last": "Park", "suffix": "" }, { "first": "William", "middle": [], "last": "Chan", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Chung-Cheng", "middle": [], "last": "Chiu", "suffix": "" }, { "first": "Barret", "middle": [], "last": "Zoph", "suffix": "" }, { "first": "D", "middle": [], "last": "Ekin", "suffix": "" }, { "first": "", "middle": [], "last": "Cubuk", "suffix": "" }, { "first": "V", "middle": [], "last": "Quoc", "suffix": "" }, { "first": "", "middle": [], "last": "Le", "suffix": "" } ], "year": 2019, "venue": "Proc. Interspeech", "volume": "", "issue": "", "pages": "2613--2617", "other_ids": { "DOI": [ "10.21437/Interspeech.2019-2680" ] }, "num": null, "urls": [], "raw_text": "Daniel S. Park, William Chan, Yu Zhang, Chung-Cheng Chiu, Barret Zoph, Ekin D. Cubuk, and Quoc V. Le. 2019. 
SpecAugment: A simple data augmentation method for automatic speech recognition. In Proc. Interspeech 2019, pages 2613-2617, Graz, Austria.", "links": null }, "BIBREF64": { "ref_id": "b64", "title": "Designing an online infrastructure for collecting AI data from people with disabilities", "authors": [ { "first": "Joon Sung", "middle": [], "last": "Park", "suffix": "" }, { "first": "Danielle", "middle": [], "last": "Bragg", "suffix": "" }, { "first": "Ece", "middle": [], "last": "Kamar", "suffix": "" }, { "first": "Meredith Ringel", "middle": [], "last": "Morris", "suffix": "" } ], "year": 2021, "venue": "Proc. ACM Conference on Fairness, Accountability, and Transparency", "volume": "", "issue": "", "pages": "52--63", "other_ids": { "DOI": [ "10.1145/3442188.3445870" ] }, "num": null, "urls": [], "raw_text": "Joon Sung Park, Danielle Bragg, Ece Kamar, and Meredith Ringel Morris. 2021. Designing an online infrastructure for collecting AI data from people with disabilities. In Proc. ACM Conference on Fairness, Accountability, and Transparency, page 52-63.", "links": null }, "BIBREF65": { "ref_id": "b65", "title": "The design for the Wall Street Journal-based CSR corpus", "authors": [ { "first": "B", "middle": [], "last": "Douglas", "suffix": "" }, { "first": "Janet", "middle": [ "M" ], "last": "Paul", "suffix": "" }, { "first": "", "middle": [], "last": "Baker", "suffix": "" } ], "year": 1992, "venue": "Speech and Natural Language: Proceedings of a Workshop Held at", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Douglas B. Paul and Janet M. Baker. 1992. The design for the Wall Street Journal-based CSR corpus. In Speech and Natural Language: Proceedings of a Workshop Held at Harriman, New York, February 23-26, 1992.", "links": null }, "BIBREF66": { "ref_id": "b66", "title": "The AU-RORA experimental framework for the performance evaluations of speech recognition systems under noisy condition", "authors": [ { "first": "David", "middle": [], "last": "Pearce", "suffix": "" }, { "first": "Hans-G\u00fcnter", "middle": [], "last": "Hirsch", "suffix": "" } ], "year": 2000, "venue": "Proc. International Conference on Spoken Language Processing", "volume": "", "issue": "", "pages": "29--32", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Pearce and Hans-G\u00fcnter Hirsch. 2000. The AU- RORA experimental framework for the performance evaluations of speech recognition systems under noisy condition. In Proc. International Conference on Spoken Language Processing, pages 29-32, Beijing, China.", "links": null }, "BIBREF67": { "ref_id": "b67", "title": "Shrinking Bigfoot: Reducing wav2vec 2.0 footprint", "authors": [ { "first": "Zilun", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Akshay", "middle": [], "last": "Budhkar", "suffix": "" }, { "first": "Ilana", "middle": [], "last": "Tuil", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Parinaz", "middle": [], "last": "Sobhani", "suffix": "" }, { "first": "Raphael", "middle": [], "last": "Cohen", "suffix": "" }, { "first": "Jumana", "middle": [], "last": "Nassour", "suffix": "" } ], "year": 2021, "venue": "Proc. Interspeech 2021 (submitted)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zilun Peng, Akshay Budhkar, Ilana Tuil, Jason Levy, Parinaz Sobhani, Raphael Cohen, and Jumana Nassour. 2021. Shrinking Bigfoot: Reducing wav2vec 2.0 footprint. In Proc. 
Interspeech 2021 (submitted), Brno, Czech Republic.", "links": null }, "BIBREF68": { "ref_id": "b68", "title": "Improving performance of end-to-end ASR on numeric sequences", "authors": [ { "first": "Cal", "middle": [], "last": "Peyser", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Tara", "middle": [ "N" ], "last": "Sainath", "suffix": "" }, { "first": "Zelin", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2019, "venue": "Proc. Interspeech", "volume": "", "issue": "", "pages": "2185--2189", "other_ids": { "DOI": [ "10.21437/Interspeech.2019-1345" ] }, "num": null, "urls": [], "raw_text": "Cal Peyser, Hao Zhang, Tara N. Sainath, and Zelin Wu. 2019. Improving performance of end-to-end ASR on numeric sequences. In Proc. Interspeech 2019, pages 2185-2189, Graz, Austria.", "links": null }, "BIBREF69": { "ref_id": "b69", "title": "Rethinking the corpus: Moving towards dynamic linguistic resources", "authors": [ { "first": "Andrew", "middle": [], "last": "Rosenberg", "suffix": "" } ], "year": 2012, "venue": "Proc. Interspeech 2012", "volume": "", "issue": "", "pages": "1392--1395", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew Rosenberg. 2012. Rethinking the corpus: Mov- ing towards dynamic linguistic resources. In Proc. In- terspeech 2012, pages 1392-1395, Portland, OR, USA.", "links": null }, "BIBREF70": { "ref_id": "b70", "title": "Sriram Ganapathy, and Mark Liberman. 2021. The third DIHARD diarization challenge", "authors": [ { "first": "Neville", "middle": [], "last": "Ryant", "suffix": "" }, { "first": "Prachi", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Venkat", "middle": [], "last": "Krishnamohan", "suffix": "" }, { "first": "Rajat", "middle": [], "last": "Varma", "suffix": "" }, { "first": "Kenneth", "middle": [], "last": "Church", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Cieri", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Neville Ryant, Prachi Singh, Venkat Krishnamohan, Rajat Varma, Kenneth Church, Christopher Cieri, Jun Du, Sriram Ganapathy, and Mark Liberman. 2021. The third DIHARD diarization challenge.", "links": null }, "BIBREF71": { "ref_id": "b71", "title": "Emitting word timings with end-to-end models", "authors": [ { "first": "Tara", "middle": [ "N" ], "last": "Sainath", "suffix": "" }, { "first": "Ruoming", "middle": [], "last": "Pang", "suffix": "" }, { "first": "David", "middle": [], "last": "Rybach", "suffix": "" }, { "first": "Basi", "middle": [], "last": "Garc\u00eda", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Strohman", "suffix": "" } ], "year": 2020, "venue": "Proc. Interspeech 2020", "volume": "", "issue": "", "pages": "3615--3619", "other_ids": { "DOI": [ "10.21437/Interspeech.2020-1059" ] }, "num": null, "urls": [], "raw_text": "Tara N. Sainath, Ruoming Pang, David Rybach, Basi Garc\u00eda, and Trevor Strohman. 2020a. Emitting word timings with end-to-end models. In Proc. 
Interspeech 2020, pages 3615-3619, Shanghai, China.", "links": null }, "BIBREF72": { "ref_id": "b72", "title": "A streaming on-device end-to-end model surpassing server-side conventional model quality and latency", "authors": [ { "first": "Tara", "middle": [ "N" ], "last": "Sainath", "suffix": "" } ], "year": null, "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", "volume": "", "issue": "", "pages": "6059--6063", "other_ids": { "DOI": [ "10.1109/ICASSP40776.2020.9054188" ] }, "num": null, "urls": [], "raw_text": "Tara N. Sainath et al. 2020b. A streaming on-device end-to-end model surpassing server-side conventional model quality and latency. In IEEE International Con- ference on Acoustics, Speech and Signal Processing (ICASSP), pages 6059-6063, Barcelona, Spain.", "links": null }, "BIBREF73": { "ref_id": "b73", "title": "Effect of different sampling rates and feature vector sizes on speech recognition performance", "authors": [ { "first": "Conrad", "middle": [], "last": "Sanderson", "suffix": "" }, { "first": "Kuldip", "middle": [ "K" ], "last": "Paliwal", "suffix": "" } ], "year": 1997, "venue": "IEEE Speech and Image Technologies for Computing and Telecommunications", "volume": "1", "issue": "", "pages": "161--164", "other_ids": { "DOI": [ "10.1109/TENCON.1997.647282" ] }, "num": null, "urls": [], "raw_text": "Conrad Sanderson and Kuldip K. Paliwal. 1997. Ef- fect of different sampling rates and feature vector sizes on speech recognition performance. In IEEE Speech and Image Technologies for Computing and Telecommunications, volume 1, pages 161-164.", "links": null }, "BIBREF74": { "ref_id": "b74", "title": "English conversational telephone speech recognition by humans and machines", "authors": [ { "first": "George", "middle": [], "last": "Saon", "suffix": "" }, { "first": "Gakuto", "middle": [], "last": "Kurata", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Sercu", "suffix": "" }, { "first": "Kartik", "middle": [], "last": "Audhkhasi", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Thomas", "suffix": "" }, { "first": "Dimitrios", "middle": [], "last": "Dimitriadis", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "Cui", "suffix": "" }, { "first": "Bhuvana", "middle": [], "last": "Ramabhadran", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Picheny", "suffix": "" } ], "year": 2017, "venue": "Proc. Interspeech", "volume": "", "issue": "", "pages": "132--136", "other_ids": { "DOI": [ "10.21437/Interspeech.2017-405" ] }, "num": null, "urls": [], "raw_text": "George Saon, Gakuto Kurata, Tom Sercu, Kartik Audhkhasi, Samuel Thomas, Dimitrios Dimitriadis, Xi- aodong Cui, Bhuvana Ramabhadran, Michael Picheny, Lynn-Li Lim, Bergul Roomi, and Phil Hall. 2017. English conversational telephone speech recognition by humans and machines. In Proc. Interspeech 2017, pages 132-136, Stockholm, Sweden.", "links": null }, "BIBREF75": { "ref_id": "b75", "title": "Analyzing the quality and stability of a streaming end-to-end on-device speech recognizer", "authors": [ { "first": "Yuan", "middle": [], "last": "Shangguan", "suffix": "" }, { "first": "Kate", "middle": [], "last": "Knister", "suffix": "" }, { "first": "Yanzhang", "middle": [], "last": "He", "suffix": "" }, { "first": "Ian", "middle": [], "last": "Mcgraw", "suffix": "" }, { "first": "Fran\u00e7oise", "middle": [], "last": "Beaufays", "suffix": "" } ], "year": 2020, "venue": "Proc. 
Interspeech 2020", "volume": "", "issue": "", "pages": "591--595", "other_ids": { "DOI": [ "10.21437/Interspeech.2020-1194" ] }, "num": null, "urls": [], "raw_text": "Yuan Shangguan, Kate Knister, Yanzhang He, Ian McGraw, and Fran\u00e7oise Beaufays. 2020. Analyzing the quality and stability of a streaming end-to-end on-device speech recognizer. In Proc. Interspeech 2020, pages 591-595, Shanghai, China.", "links": null }, "BIBREF76": { "ref_id": "b76", "title": "Dissecting user-perceived latency of on-device E2E speech recognition", "authors": [ { "first": "Yuan", "middle": [], "last": "Shangguan", "suffix": "" }, { "first": "Rohit", "middle": [], "last": "Prabhavalkar", "suffix": "" }, { "first": "Hang", "middle": [], "last": "Su", "suffix": "" }, { "first": "Jay", "middle": [], "last": "Mahadeokar", "suffix": "" }, { "first": "Yangyang", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Jiatong", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Chunyang", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Duc", "middle": [], "last": "Le", "suffix": "" }, { "first": "Ozlem", "middle": [], "last": "Kalinli", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Fuegen", "suffix": "" }, { "first": "Michael", "middle": [ "L" ], "last": "Seltzer", "suffix": "" } ], "year": 2021, "venue": "Proc. Interspeech 2021 (submitted)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuan Shangguan, Rohit Prabhavalkar, Hang Su, Jay Mahadeokar, Yangyang Shi, Jiatong Zhou, Chunyang Wu, Duc Le, Ozlem Kalinli, Christian Fuegen, and Michael L. Seltzer. 2021. Dissecting user-perceived latency of on-device E2E speech recognition. In Proc. Interspeech 2021 (submitted), Brno, Czech Republic.", "links": null }, "BIBREF77": { "ref_id": "b77", "title": "Personalizing ASR for dysarthric and accented speech with limited data", "authors": [ { "first": "Joel", "middle": [], "last": "Shor", "suffix": "" }, { "first": "Dotan", "middle": [], "last": "Emanuel", "suffix": "" }, { "first": "Oran", "middle": [], "last": "Lang", "suffix": "" }, { "first": "Omry", "middle": [], "last": "Tuval", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Brenner", "suffix": "" }, { "first": "Julie", "middle": [], "last": "Cattiau", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Vieira", "suffix": "" }, { "first": "Maeve", "middle": [], "last": "Mcnally", "suffix": "" }, { "first": "Taylor", "middle": [], "last": "Charbonneau", "suffix": "" }, { "first": "Melissa", "middle": [], "last": "Nollstadt", "suffix": "" }, { "first": "Avinatan", "middle": [], "last": "Hassidim", "suffix": "" }, { "first": "Yossi", "middle": [], "last": "Matias", "suffix": "" } ], "year": 2019, "venue": "Proc. Interspeech", "volume": "", "issue": "", "pages": "784--788", "other_ids": { "DOI": [ "10.21437/Interspeech.2019-1427" ] }, "num": null, "urls": [], "raw_text": "Joel Shor, Dotan Emanuel, Oran Lang, Omry Tuval, Michael Brenner, Julie Cattiau, Fernando Vieira, Maeve McNally, Taylor Charbonneau, Melissa Noll- stadt, Avinatan Hassidim, and Yossi Matias. 2019. Personalizing ASR for dysarthric and accented speech with limited data. In Proc. 
Interspeech 2019, pages 784-788, Graz, Austria.", "links": null }, "BIBREF78": { "ref_id": "b78", "title": "On the effects of speech rate in large vocabulary speech recognition systems", "authors": [ { "first": "Matthew", "middle": [ "A" ], "last": "Siegler", "suffix": "" }, { "first": "Richard", "middle": [ "M" ], "last": "Stern", "suffix": "" } ], "year": 1995, "venue": "IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP)", "volume": "", "issue": "", "pages": "612--615", "other_ids": { "DOI": [ "10.1109/ICASSP.1995.479672" ] }, "num": null, "urls": [], "raw_text": "Matthew A. Siegler and Richard M. Stern. 1995. On the effects of speech rate in large vocabulary speech recognition systems. In IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pages 612-615, Detroit, MI, USA.", "links": null }, "BIBREF79": { "ref_id": "b79", "title": "Normalization of non-standard words", "authors": [ { "first": "Richard", "middle": [], "last": "Sproat", "suffix": "" }, { "first": "Alan", "middle": [ "W" ], "last": "Black", "suffix": "" }, { "first": "Stanley", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Shankar", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Mari", "middle": [], "last": "Ostendorf", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Richards", "suffix": "" } ], "year": 2001, "venue": "Computer Speech & Language", "volume": "15", "issue": "3", "pages": "287--333", "other_ids": { "DOI": [ "10.1006/csla.2001.0169" ] }, "num": null, "urls": [], "raw_text": "Richard Sproat, Alan W. Black, Stanley Chen, Shankar Kumar, Mari Ostendorf, and Christopher Richards. 2001. Normalization of non-standard words. Computer Speech & Language, 15(3):287-333.", "links": null }, "BIBREF80": { "ref_id": "b80", "title": "IGC-parl: Icelandic corpus of parliamentary proceedings", "authors": [ { "first": "Steint\u00f3r", "middle": [], "last": "Steingr\u00edmsson", "suffix": "" }, { "first": "Starkadur", "middle": [], "last": "Barkarson", "suffix": "" }, { "first": "Gunnar", "middle": [ "Thor" ], "last": "\u00d6rn\u00f3lfsson", "suffix": "" } ], "year": 2020, "venue": "Proc. ParlaCLARIN Workshop", "volume": "", "issue": "", "pages": "11--17", "other_ids": {}, "num": null, "urls": [], "raw_text": "Steint\u00f3r Steingr\u00edmsson, Starkadur Barkarson, and Gunnar Thor \u00d6rn\u00f3lfsson. 2020. IGC-parl: Icelandic corpus of parliamentary proceedings. In Proc.
ParlaCLARIN Workshop, pages 11-17, Marseille, France.", "links": null }, "BIBREF81": { "ref_id": "b81", "title": "WER we are and WER we think we are", "authors": [ { "first": "Piotr", "middle": [], "last": "Szyma\u0144ski", "suffix": "" }, { "first": "Piotr", "middle": [], "last": "\u017belasko", "suffix": "" }, { "first": "Mikolaj", "middle": [], "last": "Morzy", "suffix": "" }, { "first": "Adrian", "middle": [], "last": "Szymczak", "suffix": "" }, { "first": "Marzena", "middle": [], "last": "\u017by\u0142a-Hoppe", "suffix": "" }, { "first": "Joanna", "middle": [], "last": "Banaszczak", "suffix": "" }, { "first": "Lukasz", "middle": [], "last": "Augustyniak", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Mizgajski", "suffix": "" }, { "first": "Yishay", "middle": [], "last": "Carmiel", "suffix": "" } ], "year": 2020, "venue": "Findings of the Association for Computational Linguistics: EMNLP 2020", "volume": "", "issue": "", "pages": "3290--3295", "other_ids": { "DOI": [ "10.18653/v1/2020.findings-emnlp.295" ] }, "num": null, "urls": [], "raw_text": "Piotr Szyma\u0144ski, Piotr \u017belasko, Mikolaj Morzy, Adrian Szymczak, Marzena \u017by\u0142a-Hoppe, Joanna Banaszczak, Lukasz Augustyniak, Jan Mizgajski, and Yishay Carmiel. 2020. WER we are and WER we think we are. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 3290-3295.", "links": null }, "BIBREF82": { "ref_id": "b82", "title": "Gender and dialect bias in YouTube's automatic captions", "authors": [ { "first": "Rachael", "middle": [], "last": "Tatman", "suffix": "" } ], "year": 2017, "venue": "Proc. ACL Workshop on Ethics in Natural Language Processing", "volume": "", "issue": "", "pages": "53--59", "other_ids": { "DOI": [ "10.18653/v1/W17-1606" ] }, "num": null, "urls": [], "raw_text": "Rachael Tatman. 2017. Gender and dialect bias in YouTube's automatic captions. In Proc. ACL Workshop on Ethics in Natural Language Processing, pages 53-59, Valencia, Spain.", "links": null }, "BIBREF83": { "ref_id": "b83", "title": "Effects of talker dialect, gender & race on accuracy of Bing Speech and YouTube automatic captions", "authors": [ { "first": "Rachael", "middle": [], "last": "Tatman", "suffix": "" }, { "first": "Conner", "middle": [], "last": "Kasten", "suffix": "" } ], "year": 2017, "venue": "Proc. Interspeech", "volume": "", "issue": "", "pages": "934--938", "other_ids": { "DOI": [ "10.21437/Interspeech.2017-1746" ] }, "num": null, "urls": [], "raw_text": "Rachael Tatman and Conner Kasten. 2017. Effects of talker dialect, gender & race on accuracy of Bing Speech and YouTube automatic captions. In Proc. Interspeech 2017, pages 934-938, Stockholm, Sweden.", "links": null }, "BIBREF84": { "ref_id": "b84", "title": "Is word error rate a good indicator for spoken language understanding accuracy", "authors": [ { "first": "Ye-Yi", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Acero", "suffix": "" }, { "first": "Ciprian", "middle": [], "last": "Chelba", "suffix": "" } ], "year": 2003, "venue": "IEEE Workshop on Automatic Speech Recognition and Understanding", "volume": "", "issue": "", "pages": "577--582", "other_ids": { "DOI": [ "10.1109/ASRU.2003.1318504" ] }, "num": null, "urls": [], "raw_text": "Ye-Yi Wang, Alex Acero, and Ciprian Chelba. 2003. Is word error rate a good indicator for spoken language understanding accuracy.
In IEEE Workshop on Automatic Speech Recognition and Understanding, pages 577-582, St Thomas, VI, USA.", "links": null }, "BIBREF85": { "ref_id": "b85", "title": "Speech accent archive", "authors": [ { "first": "Steven", "middle": [], "last": "Weinberger", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Steven Weinberger. 2015. Speech accent archive. George Mason University.", "links": null }, "BIBREF86": { "ref_id": "b86", "title": "Handbook of varieties of English: The grammar of urban African American Vernacular English", "authors": [ { "first": "Walt", "middle": [], "last": "Wolfram", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "111--132", "other_ids": { "DOI": [ "10.1515/9783110197181" ] }, "num": null, "urls": [], "raw_text": "Walt Wolfram. 2004. Handbook of varieties of English: The grammar of urban African American Vernacular English, pages 111-132. Mouton de Gruyter, Berlin, Germany.", "links": null }, "BIBREF87": { "ref_id": "b87", "title": "Toward human parity in conversational speech recognition", "authors": [ { "first": "Wayne", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Jasha", "middle": [], "last": "Droppo", "suffix": "" }, { "first": "Xuedong", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Frank", "middle": [], "last": "Seide", "suffix": "" }, { "first": "Michael", "middle": [ "L" ], "last": "Seltzer", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "Stolcke", "suffix": "" }, { "first": "Dong", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Zweig", "suffix": "" } ], "year": 2017, "venue": "IEEE/ACM Transactions on Audio, Speech, and Language Processing", "volume": "25", "issue": "12", "pages": "2410--2423", "other_ids": { "DOI": [ "10.1109/TASLP.2017.2756440" ] }, "num": null, "urls": [], "raw_text": "Wayne Xiong, Jasha Droppo, Xuedong Huang, Frank Seide, Michael L. Seltzer, Andreas Stolcke, Dong Yu, and Geoffrey Zweig. 2017. Toward human parity in conversational speech recognition. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 25(12):2410-2423.", "links": null }, "BIBREF88": { "ref_id": "b88", "title": "Calibration of confidence measures in speech recognition", "authors": [ { "first": "Dong", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Jinyu", "middle": [], "last": "Li", "suffix": "" }, { "first": "Li", "middle": [], "last": "Deng", "suffix": "" } ], "year": 2011, "venue": "IEEE Transactions on Audio, Speech, and Language Processing", "volume": "19", "issue": "8", "pages": "2461--2473", "other_ids": { "DOI": [ "10.1109/TASL.2011.2141988" ] }, "num": null, "urls": [], "raw_text": "Dong Yu, Jinyu Li, and Li Deng. 2011. Calibration of confidence measures in speech recognition.
IEEE Transactions on Audio, Speech, and Language Processing, 19(8):2461-2473.", "links": null }, "BIBREF89": { "ref_id": "b89", "title": "FastEmit: Low-latency streaming ASR with sequence-level emission regularization", "authors": [ { "first": "Jiahui", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Chung-Cheng", "middle": [], "last": "Chiu", "suffix": "" }, { "first": "Bo", "middle": [], "last": "Li", "suffix": "" }, { "first": "Shuo-Yiin", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Tara", "middle": [ "N" ], "last": "Sainath", "suffix": "" }, { "first": "Arun", "middle": [], "last": "Yanzhang (ryan) He", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Narayanan", "suffix": "" }, { "first": "Anmol", "middle": [], "last": "Han", "suffix": "" }, { "first": "Yonghui", "middle": [], "last": "Gulati", "suffix": "" }, { "first": "Ruoming", "middle": [], "last": "Wu", "suffix": "" }, { "first": "", "middle": [], "last": "Pang", "suffix": "" } ], "year": 2021, "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiahui Yu, Chung-Cheng Chiu, Bo Li, Shuo-yiin Chang, Tara N. Sainath, Yanzhang (Ryan) He, Arun Narayanan, Wei Han, Anmol Gulati, Yonghui Wu, and Ruoming Pang. 2021. FastEmit: Low-latency stream- ing ASR with sequence-level emission regularization. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Toronto, Canada.", "links": null }, "BIBREF90": { "ref_id": "b90", "title": "Pushing the limits of semi-supervised learning for automatic speech recognition", "authors": [ { "first": "Yu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "James", "middle": [], "last": "Qin", "suffix": "" }, { "first": "Daniel", "middle": [ "S" ], "last": "Park", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Han", "suffix": "" }, { "first": "Chung-Cheng", "middle": [], "last": "Chiu", "suffix": "" }, { "first": "Ruoming", "middle": [], "last": "Pang", "suffix": "" }, { "first": "V", "middle": [], "last": "Quoc", "suffix": "" }, { "first": "Yonghui", "middle": [], "last": "Le", "suffix": "" }, { "first": "", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2020, "venue": "Proc. NeurIPS Workshop on Self-Supervised Learning for Speech and Audio Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yu Zhang, James Qin, Daniel S. Park, Wei Han, Chung- Cheng Chiu, Ruoming Pang, Quoc V. Le, and Yonghui Wu. 2020. Pushing the limits of semi-supervised learning for automatic speech recognition. In Proc. NeurIPS Workshop on Self-Supervised Learning for Speech and Audio Processing.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "num": null, "type_str": "figure", "text": "Examples of WER sliced into groups A, B, and C, with the width of the bars reflecting relative sizes of those groups." } } } }