# coding=utf-8
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Lint as: python3
"""ESC benchmark datasets."""

import csv
import json
import logging
import os
import re
import urllib.parse  # explicit submodule import: `urllib.parse.quote` is used below
from collections import defaultdict
from io import BytesIO
from pathlib import Path

import numpy as np
import requests
import soundfile as sf
from tqdm.auto import tqdm

import datasets
from huggingface_hub import HfApi, HfFolder

from .cv_release_stats import STATS as _COMMON_VOICE_STATS


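# Minimal usage sketch (illustrative, not part of the loader): assuming this file
# and its companion `cv_release_stats.py` are used as a local `datasets` loading
# script, each ESC dataset is selected by config name, e.g.:
#
#     from datasets import load_dataset
#
#     ami = load_dataset("./esc-datasets.py", "ami", split="validation", streaming=True)
#     sample = next(iter(ami))
#     # sample -> {"audio": {...}, "dataset": "ami", "text": "...", "id": "..."}
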
_DESCRIPTIONS = {
    "ami": """
The AMI Meeting Corpus is a multi-modal data set consisting of 100 hours of meeting recordings.
The recordings use a range of signals synchronized to a common timeline. These include close-talking
and far-field microphones, individual and room-view video cameras, and output from a slide projector
and an electronic whiteboard.
""",
    "spgispeech": """
The SPGISpeech corpus is derived from company earnings calls manually transcribed by S&P Global, Inc.
according to a professional style guide detailing conventions for capitalization, punctuation, denormalization
of non-standard words and transcription of disfluencies in spontaneous speech. The basic unit of SPGISpeech is a
pair consisting of a 5 to 15 second long 16 bit, 16kHz mono wav audio file and its transcription.
""",
    "voxpopuli": """
A large-scale multilingual speech corpus for representation learning, semi-supervised learning and interpretation.
The raw data is collected from 2009-2020 European Parliament event recordings.
""",
    "tedlium": """
The TED-LIUM corpus consists of English-language TED talks, with transcriptions, sampled at 16kHz.
All talks and text are property of TED Conferences LLC.
""",
    "gigaspeech": """
GigaSpeech is an evolving, multi-domain English speech recognition corpus with 10,000 hours of high quality
labeled audio suitable for supervised training, and 40,000 hours of total audio suitable for semi-supervised
and unsupervised training. Around 40,000 hours of transcribed audio is first collected from audiobooks, podcasts
and YouTube, covering both read and spontaneous speaking styles, and a variety of topics, such as arts, science,
sports, etc. A new forced alignment and segmentation pipeline is proposed to create sentence segments suitable
for speech recognition training, and to filter out segments with low-quality transcription. For system training,
GigaSpeech provides five subsets of different sizes: 10h, 250h, 1000h, 2500h, and 10000h.
""",
    "librispeech": """
LibriSpeech is a corpus of approximately 1000 hours of read English speech with a sampling rate of 16 kHz,
prepared by Vassil Panayotov with the assistance of Daniel Povey. The data is derived from read
audiobooks from the LibriVox project, and has been carefully segmented and aligned.
""",
    "common_voice": """
Common Voice is Mozilla's initiative to help teach machines how real people speak.
The Common Voice dataset consists of a unique MP3 and corresponding text file.
""",
    "earnings22": """
The Earnings 22 dataset (also referred to as earnings22) is a 119-hour corpus of English-language earnings calls
collected from global companies. Its primary purpose is to serve as a benchmark for industrial and academic
automatic speech recognition (ASR) models on real-world accented speech.
""",
}

_CITATIONS = {
    "ami": """
@inproceedings{10.1007/11677482_3,
author = {Carletta, Jean and Ashby, Simone and Bourban, Sebastien and Flynn, Mike and Guillemot, Mael and Hain, Thomas
and Kadlec, Jaroslav and Karaiskos, Vasilis and Kraaij, Wessel and Kronenthal, Melissa and Lathoud, Guillaume
and Lincoln, Mike and Lisowska, Agnes and McCowan, Iain and Post, Wilfried and Reidsma, Dennis and Wellner, Pierre},
title = {The AMI Meeting Corpus: A Pre-Announcement},
year = {2005},
isbn = {3540325492},
publisher = {Springer-Verlag},
address = {Berlin, Heidelberg},
url = {https://doi.org/10.1007/11677482_3},
doi = {10.1007/11677482_3},
booktitle = {Proceedings of the Second International Conference on Machine Learning for Multimodal Interaction},
pages = {28–39},
numpages = {12},
location = {Edinburgh, UK},
series = {MLMI'05}
}
""",
    "spgispeech": """
@article{2021arXiv210402014O,
author = {{O'Neill}, Patrick K. and {Lavrukhin}, Vitaly and {Majumdar}, Somshubra and {Noroozi}, Vahid and {Zhang}, Yuekai
and {Kuchaiev}, Oleksii and {Balam}, Jagadeesh and {Dovzhenko}, Yuliya and {Freyberg}, Keenan and {Shulman}, Michael D.
and {Ginsburg}, Boris and {Watanabe}, Shinji and {Kucsko}, Georg},
title = "{SPGISpeech: 5,000 hours of transcribed financial audio for fully formatted end-to-end speech recognition}",
journal = {arXiv e-prints},
keywords = {Computer Science - Computation and Language, Electrical Engineering and Systems Science - Audio and Speech Processing},
year = 2021,
month = apr,
eid = {arXiv:2104.02014},
pages = {arXiv:2104.02014},
eprint = {2104.02014},
primaryClass = {cs.CL},
adsurl = {https://ui.adsabs.harvard.edu/abs/2021arXiv210402014O},
adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}
""",
    "voxpopuli": """
@inproceedings{wang-etal-2021-voxpopuli,
title = "{V}ox{P}opuli: A Large-Scale Multilingual Speech Corpus for Representation Learning,
Semi-Supervised Learning and Interpretation",
author = "Wang, Changhan and Riviere, Morgane and Lee, Ann and Wu, Anne and Talnikar, Chaitanya and Haziza,
Daniel and Williamson, Mary and Pino, Juan and Dupoux, Emmanuel",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th
International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.80",
doi = "10.18653/v1/2021.acl-long.80",
pages = "993--1003",
}
""",
    "tedlium": """
@inproceedings{hernandez2018tedlium3,
title={TED-LIUM 3: twice as much data and corpus repartition for experiments on speaker adaptation},
author={Hernandez, Fran{\\c{c}}ois and Nguyen, Vincent and Ghannay, Sahar and Tomashenko, Natalia and Est{\\`e}ve, Yannick},
booktitle={International Conference on Speech and Computer},
pages={198--208},
year={2018},
organization={Springer}
}
""",
    "gigaspeech": """
@article{DBLP:journals/corr/abs-2106-06909,
author = {Guoguo Chen and Shuzhou Chai and Guanbo Wang and Jiayu Du and Wei{-}Qiang Zhang and Chao Weng and Dan Su
and Daniel Povey and Jan Trmal and Junbo Zhang and Mingjie Jin and Sanjeev Khudanpur and Shinji Watanabe and
Shuaijiang Zhao and Wei Zou and Xiangang Li and Xuchen Yao and Yongqing Wang and Yujun Wang and Zhao You and Zhiyong Yan},
title = {GigaSpeech: An Evolving, Multi-domain {ASR} Corpus with 10, 000 Hours
of Transcribed Audio},
journal = {CoRR},
volume = {abs/2106.06909},
year = {2021},
url = {https://arxiv.org/abs/2106.06909},
eprinttype = {arXiv},
eprint = {2106.06909},
timestamp = {Wed, 29 Dec 2021 14:29:26 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-06909.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
""",
    "librispeech": """
@inproceedings{panayotov2015librispeech,
title={Librispeech: an ASR corpus based on public domain audio books},
author={Panayotov, Vassil and Chen, Guoguo and Povey, Daniel and Khudanpur, Sanjeev},
booktitle={Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on},
pages={5206--5210},
year={2015},
organization={IEEE}
}
""",
    "common_voice": """
@inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
}
""",
    "earnings22": """
@misc{https://doi.org/10.48550/arxiv.2203.15591,
doi = {10.48550/ARXIV.2203.15591},
url = {https://arxiv.org/abs/2203.15591},
author = {Del Rio, Miguel and Ha, Peter and McNamara, Quinten and Miller, Corey and Chandra, Shipra},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Earnings-22: A Practical Benchmark for Accents in the Wild},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution Share Alike 4.0 International}
}
""",
}

_HOMEPAGE_URLS = {
    "ami": "https://groups.inf.ed.ac.uk/ami/corpus/",
    "spgispeech": "https://datasets.kensho.com/datasets/spgispeech",
    "voxpopuli": "https://github.com/facebookresearch/voxpopuli",
    "tedlium": "https://www.openslr.org/51/",
    "gigaspeech": "https://github.com/SpeechColab/GigaSpeech",
    "librispeech": "http://www.openslr.org/12",
    "common_voice": "https://commonvoice.mozilla.org/en/datasets",
    "earnings22": "https://github.com/revdotcom/speech-datasets/tree/main/earnings22",
}

_LICENSES = {
    "ami": "CC BY 4.0",
    "spgispeech": "Custom license (academic use only)",
    "voxpopuli": "CC0, also see https://www.europarl.europa.eu/legal-notice/en/",
    "tedlium": "Creative Commons BY-NC-ND 3.0 (http://creativecommons.org/licenses/by-nc-nd/3.0/deed.en)",
    "gigaspeech": "Apache License 2.0",
    "librispeech": "CC BY 4.0",
    "common_voice": "Mozilla Public License 2.0 (https://github.com/common-voice/common-voice/blob/main/LICENSE)",
    "earnings22": "CC BY-SA 4.0",
}

_DATASET_TO_CONFIGS = {
    "spgispeech": ["l", "s", "m"],
    "gigaspeech": ["l", "xs", "s", "m", "xl"],
    "librispeech": ["default", "clean.100", "clean.360", "other.500"],
}

_ALL_CONFIGS = list(_DATASET_TO_CONFIGS) + ["earnings22", "ami", "tedlium", "voxpopuli", "common_voice"]


class ESCConfig(datasets.BuilderConfig):
    """BuilderConfig for ESC benchmark datasets."""

    def __init__(self, name, subconfig, description, citation, homepage, license, **kwargs):
        """
        Args:
          name: `string`, name of a dataset to be downloaded (for example, "gigaspeech")
          subconfig: `string`, specific configuration of a dataset, relevant for "spgispeech", "gigaspeech", and "librispeech"
          description: `string`, dataset description
          citation: `string`, dataset citation
          homepage: `string`, dataset homepage
          license: `string`, dataset license
          **kwargs: keyword arguments forwarded to super.
        """
        if name in _DATASET_TO_CONFIGS:
            # first config is the default one
            self.subconfig = _DATASET_TO_CONFIGS[name][0] if subconfig == "default" else subconfig
        else:
            self.subconfig = None

        super(ESCConfig, self).__init__(
            name=name,
            version=datasets.Version("1.0.0", ""),
            **kwargs,
        )
        self.description = description
        self.citation = citation
        self.homepage = homepage
        self.license = license


def _build_config(name, subconfig):
    return ESCConfig(
        name=name,
        subconfig=subconfig,
        description=_DESCRIPTIONS[name],
        citation=_CITATIONS[name],
        homepage=_HOMEPAGE_URLS[name],
        license=_LICENSES[name],
    )


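# Illustrative (assumption, not in the original script): the first entry of each
# list in _DATASET_TO_CONFIGS is the default subconfig, so e.g.
#     _build_config("gigaspeech", subconfig="default").subconfig == "l"
#     _build_config("ami", subconfig="default").subconfig is None  # no subconfigs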
class ESCDatasets(datasets.GeneratorBasedBuilder):
    """ESC benchmark datasets builder."""

    DEFAULT_WRITER_BATCH_SIZE = 256
    BUILDER_CONFIGS = [
        _build_config(name, subconfig="default") for name in _ALL_CONFIGS
    ]

    def _info(self):
        features = datasets.Features(
            {
                "audio": datasets.Audio(sampling_rate=16_000),
                "dataset": datasets.Value("string"),
                "text": datasets.Value("string"),
                "id": datasets.Value("string"),
            }
        )
        return datasets.DatasetInfo(  # TODO: add benchmark's own license and description
            features=features,
            description=self.config.description,
            homepage=self.config.homepage,
            license=self.config.license,
            citation=self.config.citation,
        )

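    # Note: every config yields examples with the schema declared in `_info` above:
    #     {"audio": <decoded by datasets.Audio>, "dataset": "<dataset name>",
    #      "text": "<normalized transcription>", "id": "<utterance id>"}
    # Most datasets pass raw bytes to the "audio" feature; TED-LIUM passes a
    # pre-sliced numpy array together with its sampling rate.
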
    def _split_generators(self, dl_manager):
        if self.config.name == "ami":
            return self._ami_split_generators(dl_manager)
        elif self.config.name == "spgispeech":
            return self._spgispeech_split_generators(dl_manager)
        elif self.config.name == "voxpopuli":
            return self._voxpopuli_split_generators(dl_manager)
        elif self.config.name == "tedlium":
            return self._tedlium_split_generators(dl_manager)
        elif self.config.name == "gigaspeech":
            return self._gigaspeech_split_generators(dl_manager)
        elif self.config.name == "librispeech":
            return self._librispeech_split_generators(dl_manager)
        elif self.config.name == "common_voice":
            return self._common_voice_split_generators(dl_manager)
        elif self.config.name == "earnings22":
            return self._earnings_split_generators(dl_manager)

    def _generate_examples(self, *args, **kwargs):
        if self.config.name == "ami":
            yield from self._ami_generate_examples(*args, **kwargs)
        elif self.config.name == "spgispeech":
            yield from self._spgispeech_generate_examples(*args, **kwargs)
        elif self.config.name == "voxpopuli":
            yield from self._voxpopuli_generate_examples(*args, **kwargs)
        elif self.config.name == "tedlium":
            yield from self._tedlium_generate_examples(*args, **kwargs)
        elif self.config.name == "gigaspeech":
            yield from self._gigaspeech_generate_examples(*args, **kwargs)
        elif self.config.name == "librispeech":
            yield from self._librispeech_generate_examples(*args, **kwargs)
        elif self.config.name == "common_voice":
            yield from self._common_voice_generate_examples(*args, **kwargs)
        elif self.config.name == "earnings22":
            yield from self._earnings_generate_examples(*args, **kwargs)

    def _ami_split_generators(self, dl_manager):
        splits = ["train", "dev", "eval"]

        audio_archives_urls = {}
        for split in splits:
            audio_archives_urls[split] = [
                _AMI_AUDIO_ARCHIVE_URL.format(split=split, _id=m) for m in _AMI_SAMPLE_IDS[split]
            ]

        audio_archives = dl_manager.download(audio_archives_urls)
        local_extracted_archives_paths = dl_manager.extract(audio_archives) if not dl_manager.is_streaming else {
            split: [None] * len(audio_archives[split]) for split in splits
        }

        annotations_urls = {split: _AMI_ANNOTATIONS_ARCHIVE_URL.format(split=split) for split in splits}
        annotations = dl_manager.download(annotations_urls)

        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={
                    "audio_archives": [dl_manager.iter_archive(archive) for archive in audio_archives["train"]],
                    "local_extracted_archives_paths": local_extracted_archives_paths["train"],
                    "annotation": annotations["train"],
                    "split": "train",
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.VALIDATION,
                gen_kwargs={
                    "audio_archives": [dl_manager.iter_archive(archive) for archive in audio_archives["dev"]],
                    "local_extracted_archives_paths": local_extracted_archives_paths["dev"],
                    "annotation": annotations["dev"],
                    "split": "dev",
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.TEST,
                gen_kwargs={
                    "audio_archives": [dl_manager.iter_archive(archive) for archive in audio_archives["eval"]],
                    "local_extracted_archives_paths": local_extracted_archives_paths["eval"],
                    "annotation": annotations["eval"],
                    "split": "eval",
                },
            ),
        ]

    def _ami_generate_examples(self, audio_archives, local_extracted_archives_paths, annotation, split):
        assert len(audio_archives) == len(local_extracted_archives_paths)

        with open(annotation, "r", encoding="utf-8") as f:
            transcriptions = {}
            for line in f.readlines():
                line_items = line.strip().split()
                _id = line_items[0]
                text = " ".join(line_items[1:])
                _, meeting_id, microphone_id, speaker_id, begin_time, end_time = _id.split("_")
                audio_filename = "_".join([split, _id.lower()]) + ".wav"

                transcriptions[audio_filename] = {
                    "id": _id,
                    "text": text,
                }

        features = ["id", "text"]
        for archive, local_archive_path in zip(audio_archives, local_extracted_archives_paths):
            for audio_path, audio_file in archive:
                # audio_path is like 'EN2001a/train_ami_en2001a_h00_mee068_0414915_0415078.wav'
                audio_meta = transcriptions[audio_path.split("/")[-1]]

                yield audio_path, {
                    "audio": {
                        "path": os.path.join(local_archive_path, audio_path) if local_archive_path else audio_path,
                        "bytes": audio_file.read(),
                    },
                    "dataset": "ami",
                    **{feature: audio_meta[feature] for feature in features},
                }

    def _spgispeech_split_generators(self, dl_manager):
        subconfig = self.config.subconfig
        subsets = [subconfig] + ["dev", "test"]

        meta_path = dl_manager.download_and_extract(
            {subset: os.path.join(_SPGISPEECH_META_BASE_URL, _SPGISPEECH_META_FILENAMES[subset]) for subset in subsets}
        )

        archive_urls = defaultdict(list)
        for subset in subsets:
            for subset_dir in _SPGISPEECH_SUBSET_TO_DIR[subset]:
                for archive_name in _SPGISPEECH_AUDIO_ARCHIVES_NAMES[subset_dir]:
                    archive_urls[subset].append(os.path.join(_SPGISPEECH_AUDIO_BASE_URL, subset_dir, archive_name))

        archive_paths = dl_manager.download(archive_urls)

        local_extracted_archive_paths = (
            dl_manager.extract(archive_paths)
            if not dl_manager.is_streaming
            else {subset: [None] * len(archive_paths[subset]) for subset in subsets}
        )

        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={
                    "local_extracted_archive_paths": local_extracted_archive_paths[subconfig],
                    "archives": [dl_manager.iter_archive(path) for path in archive_paths[subconfig]],
                    "meta_path": meta_path[subconfig],
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.VALIDATION,
                gen_kwargs={
                    "local_extracted_archive_paths": local_extracted_archive_paths["dev"],
                    "archives": [dl_manager.iter_archive(path) for path in archive_paths["dev"]],
                    "meta_path": meta_path["dev"],
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.TEST,
                gen_kwargs={
                    "local_extracted_archive_paths": local_extracted_archive_paths["test"],
                    "archives": [dl_manager.iter_archive(path) for path in archive_paths["test"]],
                    "meta_path": meta_path["test"],
                },
            ),
        ]

    def _spgispeech_generate_examples(self, local_extracted_archive_paths, archives, meta_path):
        # define the expected metadata dict keys;
        # some files have metadata with erroneous entries that we have to filter out
        dict_keys = {"id": "wav_filename", "text": "transcript"}

        logging.info("Reading spgispeech metadata")
        with open(meta_path, encoding="utf-8") as f:
            csvreader = csv.DictReader(f, delimiter="|")
            metadata = {x["wav_filename"]: dict((k, x[v]) for k, v in dict_keys.items()) for x in tqdm(csvreader, leave=False)}

        for local_extracted_archive_path, archive in zip(local_extracted_archive_paths, archives):
            # Here we iterate over all the files within the TAR archive:
            for audio_filename, audio_file in archive:
                audio_filename = audio_filename.lstrip("./")
                # if an audio file exists locally (i.e. in default, non-streaming mode) set the full path to it
                # by joining the directory that the archive was extracted to and the audio filename
                path = (
                    os.path.join(local_extracted_archive_path, audio_filename)
                    if local_extracted_archive_path
                    else audio_filename
                )
                # get the .wav filename by removing the directory path from the audio filename
                wav_filename = "/".join(audio_filename.split("/")[-2:])
                example = dict(metadata[wav_filename])
                example["audio"] = {"path": path, "bytes": audio_file.read()}
                example["dataset"] = "spgispeech"
                yield audio_filename, example

    def _voxpopuli_split_generators(self, dl_manager):
        n_shards_path = dl_manager.download_and_extract(_VOXPOPULI_N_SHARDS_FILE)
        with open(n_shards_path) as f:
            n_shards = json.load(f)["en"]  # we use only the English language in this benchmark
        splits = ["train", "dev", "test"]

        audio_urls = {}
        for split in splits:
            audio_urls[split] = [
                _VOXPOPULI_AUDIO_ARCHIVE_PATH.format(split=split, n_shard=i) for i in range(n_shards[split])
            ]

        meta_urls = {
            split: _VOXPOPULI_METADATA_PATH.format(split=split) for split in splits
        }

        dl_manager.download_config.num_proc = len(audio_urls["train"])
        meta_paths = dl_manager.download_and_extract(meta_urls)
        audio_paths = dl_manager.download(audio_urls)

        local_extracted_audio_paths = (
            dl_manager.extract(audio_paths) if not dl_manager.is_streaming else
            {
                split: [None] * len(audio_paths[split]) for split in splits
            }
        )
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={
                    "audio_archives": [dl_manager.iter_archive(archive) for archive in audio_paths["train"]],
                    "local_extracted_archives_paths": local_extracted_audio_paths["train"],
                    "meta_path": meta_paths["train"],
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.VALIDATION,
                gen_kwargs={
                    "audio_archives": [dl_manager.iter_archive(archive) for archive in audio_paths["dev"]],
                    "local_extracted_archives_paths": local_extracted_audio_paths["dev"],
                    "meta_path": meta_paths["dev"],
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.TEST,
                gen_kwargs={
                    "audio_archives": [dl_manager.iter_archive(archive) for archive in audio_paths["test"]],
                    "local_extracted_archives_paths": local_extracted_audio_paths["test"],
                    "meta_path": meta_paths["test"],
                },
            ),
        ]

    def _voxpopuli_generate_examples(self, audio_archives, local_extracted_archives_paths, meta_path):
        assert len(audio_archives) == len(local_extracted_archives_paths)

        logging.info("Reading voxpopuli metadata.")
        with open(meta_path) as f:
            metadata = {x["id"]: x for x in tqdm(csv.DictReader(f, delimiter="\t"), leave=False)}

        for audio_archive, local_extracted_archive_path in zip(audio_archives, local_extracted_archives_paths):
            for audio_filename, audio_file in audio_archive:
                # paths inside the archive always use forward slashes, so split on "/" rather than os.sep
                audio_id = audio_filename.split("/")[-1].split(".wav")[0]
                path = os.path.join(local_extracted_archive_path, audio_filename) if local_extracted_archive_path else audio_filename

                yield audio_id, {
                    "id": audio_id,
                    "text": metadata[audio_id]["normalized_text"].lower(),
                    "audio": {"path": path, "bytes": audio_file.read()},
                    "dataset": "voxpopuli",
                }

    def _librispeech_split_generators(self, dl_manager):
        dev_splits, test_splits = ["dev.clean", "dev.other"], ["test.clean", "test.other"]
        train_splits = ["train.clean.100", "train.clean.360", "train.other.500"] \
            if self.config.subconfig == "default" else [f"train.{self.config.subconfig}"]
        dl_urls = {config_name: _LIBRISPEECH_DL_URLS[config_name] for config_name in train_splits + dev_splits + test_splits}
        archive_paths = dl_manager.download(dl_urls)
        # (Optional) In non-streaming mode, we can extract the archive locally to have actual local audio files:
        local_extracted_archives = dl_manager.extract(archive_paths) if not dl_manager.is_streaming else {}
        train_split = [
            datasets.SplitGenerator(
                name="train",
                gen_kwargs={
                    "local_extracted_archives": [local_extracted_archives.get(train_name) for train_name in train_splits],
                    "archives": [dl_manager.iter_archive(archive_paths[train_name]) for train_name in train_splits],
                },
            )
        ]
        dev_splits = [
            datasets.SplitGenerator(
                name="validation.clean",
                gen_kwargs={
                    "local_extracted_archives": [local_extracted_archives.get("dev.clean")],
                    "archives": [dl_manager.iter_archive(archive_paths["dev.clean"])],
                },
            ),
            datasets.SplitGenerator(
                name="validation.other",
                gen_kwargs={
                    "local_extracted_archives": [local_extracted_archives.get("dev.other")],
                    "archives": [dl_manager.iter_archive(archive_paths["dev.other"])],
                },
            ),
        ]
        test_splits = [
            datasets.SplitGenerator(
                name="test.clean",
                gen_kwargs={
                    "local_extracted_archives": [local_extracted_archives.get("test.clean")],
                    "archives": [dl_manager.iter_archive(archive_paths["test.clean"])],
                },
            ),
            datasets.SplitGenerator(
                name="test.other",
                gen_kwargs={
                    "local_extracted_archives": [local_extracted_archives.get("test.other")],
                    "archives": [dl_manager.iter_archive(archive_paths["test.other"])],
                },
            ),
        ]
        return train_split + dev_splits + test_splits

    def _librispeech_generate_examples(self, archives, local_extracted_archives):
        key = 0
        audio_data = {}
        transcripts = []
        for archive, local_extracted_archive in zip(archives, local_extracted_archives):
            for path, f in archive:
                if path.endswith(".flac"):
                    id_ = path.split("/")[-1][: -len(".flac")]
                    audio_data[id_] = f.read()
                elif path.endswith(".trans.txt"):
                    for line in f:
                        if line:
                            line = line.decode("utf-8").strip()
                            id_, transcript = line.split(" ", 1)

                            # Error correction
                            transcript = transcript.lower()

                            audio_file = f"{id_}.flac"
                            audio_file = (
                                os.path.join(local_extracted_archive, audio_file)
                                if local_extracted_archive
                                else audio_file
                            )
                            transcripts.append(
                                {
                                    "id": id_,
                                    "file": audio_file,
                                    "text": transcript,
                                }
                            )
                if audio_data and len(audio_data) == len(transcripts):
                    for transcript in transcripts:
                        audio = {"path": transcript["file"], "bytes": audio_data[transcript["id"]]}
                        del transcript["file"]
                        yield key, {"audio": audio, "dataset": "librispeech", **transcript}
                        key += 1
                    audio_data = {}
                    transcripts = []

    def _common_voice_get_bundle_url(self, locale, url_template):
        # path = encodeURIComponent(path)
        path = url_template.replace("{locale}", locale)
        path = urllib.parse.quote(path.encode("utf-8"), safe="~()*!.'")
        # use_cdn = self.config.size_bytes < 20 * 1024 * 1024 * 1024
        # response = requests.get(f"{_API_URL}/bucket/dataset/{path}/{use_cdn}", timeout=10.0).json()
        response = requests.get(f"{_COMMON_VOICE_API_URL}/bucket/dataset/{path}", timeout=10.0).json()
        return response["url"]

    def _common_voice_log_download(self, locale, bundle_version, auth_token):
        if isinstance(auth_token, bool):
            auth_token = HfFolder().get_token()
        whoami = HfApi().whoami(auth_token)
        email = whoami["email"] if "email" in whoami else ""
        payload = {"email": email, "locale": locale, "dataset": bundle_version}
        requests.post(f"{_COMMON_VOICE_API_URL}/{locale}/downloaders", json=payload).json()

    def _common_voice_split_generators(self, dl_manager):
        """Returns SplitGenerators."""
        hf_auth_token = dl_manager.download_config.use_auth_token
        if hf_auth_token is None:
            raise ConnectionError(
                "Please set use_auth_token=True or use_auth_token='<TOKEN>' to download this dataset"
            )

        bundle_url_template = _COMMON_VOICE_STATS["bundleURLTemplate"]
        bundle_version = bundle_url_template.split("/")[0]
        dl_manager.download_config.ignore_url_params = True

        self._common_voice_log_download("en", bundle_version, hf_auth_token)
        archive_path = dl_manager.download(self._common_voice_get_bundle_url("en", bundle_url_template))
        local_extracted_archive = dl_manager.extract(archive_path) if not dl_manager.is_streaming else None

        path_to_data = "/".join([bundle_version, "en"])
        path_to_clips = "/".join([path_to_data, "clips"]) if path_to_data else "clips"

        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={
                    "local_extracted_archive": local_extracted_archive,
                    "archive_iterator": dl_manager.iter_archive(archive_path),
                    "metadata_filepath": "/".join([path_to_data, "train.tsv"]) if path_to_data else "train.tsv",
                    "path_to_clips": path_to_clips,
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.TEST,
                gen_kwargs={
                    "local_extracted_archive": local_extracted_archive,
                    "archive_iterator": dl_manager.iter_archive(archive_path),
                    "metadata_filepath": "/".join([path_to_data, "test.tsv"]) if path_to_data else "test.tsv",
                    "path_to_clips": path_to_clips,
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.VALIDATION,
                gen_kwargs={
                    "local_extracted_archive": local_extracted_archive,
                    "archive_iterator": dl_manager.iter_archive(archive_path),
                    "metadata_filepath": "/".join([path_to_data, "dev.tsv"]) if path_to_data else "dev.tsv",
                    "path_to_clips": path_to_clips,
                },
            ),
            datasets.SplitGenerator(
                name="other",
                gen_kwargs={
                    "local_extracted_archive": local_extracted_archive,
                    "archive_iterator": dl_manager.iter_archive(archive_path),
                    "metadata_filepath": "/".join([path_to_data, "other.tsv"]) if path_to_data else "other.tsv",
                    "path_to_clips": path_to_clips,
                },
            ),
            datasets.SplitGenerator(
                name="invalidated",
                gen_kwargs={
                    "local_extracted_archive": local_extracted_archive,
                    "archive_iterator": dl_manager.iter_archive(archive_path),
                    "metadata_filepath": "/".join([path_to_data, "invalidated.tsv"])
                    if path_to_data
                    else "invalidated.tsv",
                    "path_to_clips": path_to_clips,
                },
            ),
        ]

    def _common_voice_generate_examples(
        self,
        local_extracted_archive,
        archive_iterator,
        metadata_filepath,
        path_to_clips,
    ):
        """Yields examples."""
        data_fields = list(self._info().features.keys())
        metadata = {}
        metadata_found = False
        for path, f in archive_iterator:
            if path == metadata_filepath:
                metadata_found = True
                lines = (line.decode("utf-8") for line in f)
                reader = csv.DictReader(lines, delimiter="\t", quoting=csv.QUOTE_NONE)
                for row in reader:
                    # set absolute path for mp3 audio file
                    if not row["path"].endswith(".mp3"):
                        row["path"] += ".mp3"
                    row["path"] = os.path.join(path_to_clips, row["path"])
                    # accent -> accents in CV 8.0
                    if "accents" in row:
                        row["accent"] = row["accents"]
                        del row["accents"]
                    # if data is incomplete, fill with empty values
                    for field in data_fields:
                        if field not in row:
                            row[field] = ""
                    metadata[row["path"]] = row
            elif path.startswith(path_to_clips):
                assert metadata_found, "Found audio clips before the metadata TSV file."
                if not metadata:
                    break
                if path in metadata:
                    dict_result = dict(metadata[path])
                    # set the audio feature and the path to the extracted file
                    path = os.path.join(local_extracted_archive, path) if local_extracted_archive else path
                    result = {"id": dict_result["client_id"], "dataset": "common_voice",
                              "audio": {"path": path, "bytes": f.read()}}

                    # Error correction
                    text = dict_result["sentence"]
                    if text.startswith('"') and text.endswith('"'):
                        # we can strip the enclosing quotation marks as they do not affect the transcription
                        text = text[1:-1]
                        # collapse CSV-style doubled quotation marks into single ones
                        text = text.replace('""', '"')
                    result["text"] = text

                    yield path, result

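    # Illustrative example (assumption) of the quotation-mark correction above:
    #     '"He said ""hi"" to me."'  ->  'He said "hi" to me.'
    # i.e. enclosing quotes are dropped and CSV-style doubled quotes are collapsed.
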
    def _tedlium_split_generators(self, dl_manager):
        archive_path = dl_manager.download(_TEDLIUM_URLS)
        # (Optional) In non-streaming mode, we can extract the archive locally to have actual local audio files:
        local_extracted_archive = dl_manager.extract(archive_path) if not dl_manager.is_streaming else {}
        split_paths = [
            (datasets.Split.TRAIN, "train"),
            (datasets.Split.VALIDATION, "dev"),
            (datasets.Split.TEST, "test"),
        ]
        splits = []
        for split, split_name in split_paths:
            kwargs = {
                "filepath": [dl_manager.iter_archive(sharded_path) for sharded_path in archive_path[split_name]],
                "local_extracted_archive": local_extracted_archive.get(split_name),
                "split_path": split_name,
            }
            splits.append(datasets.SplitGenerator(name=split, gen_kwargs=kwargs))
        return splits

    def _tedlium_generate_examples(self, filepath, local_extracted_archive, split_path):
        """Generate examples from a TED-LIUM stm file."""
        if local_extracted_archive:
            for local_archive in local_extracted_archive:
                # The stm directory houses the speaker and transcription information in .stm format
                split_dir = os.path.join(local_archive, split_path)
                stm_files = [os.path.join(split_dir, f) for f in os.listdir(split_dir) if f.endswith(".stm")]
                for file in stm_files:
                    # the .sph speaker file almost always has the same file name as the .stm file
                    speaker_file = Path(file).stem
                    audio_file = os.path.join(split_dir, speaker_file + ".sph")
                    segment, sampling_rate = sf.read(audio_file, dtype=np.int16)
                    with open(file) as f:
                        for line in f:
                            line = line.strip()
                            fn, channel, speaker, start, end, label, transcript = line.split(" ", 6)
                            transcript = _maybe_trim_suffix(transcript)

                            # Error correction
                            transcript = transcript.lower()
                            if transcript in ignore_segments:
                                continue
                            # delete the <unk> token from the text
                            transcript = transcript.replace("<unk>", "")
                            # replace spaced apostrophes with un-spaced (it 's -> it's)
                            for contraction in tedlium_contractions:
                                transcript = transcript.replace(contraction, contraction[1:])
                            # JIWER compliance (for WER/CER calc.)
                            # remove multiple spaces
                            transcript = re.sub(r"\s\s+", " ", transcript)
                            # strip trailing spaces
                            transcript = transcript.strip()
                            if len(transcript) == 0:
                                continue

                            if speaker_file != fn:
                                # handle the case where the stm file does not have the same file name as the audio file
                                speaker_file = fn
                                audio_file = os.path.join(split_dir, speaker_file + ".sph")
                                segment, sampling_rate = sf.read(audio_file, dtype=np.int16)
                            samples = _extract_audio_segment(segment, sampling_rate, float(start), float(end))
                            key = "-".join([speaker, start, end, label])
                            example = {
                                "audio": {"path": audio_file, "array": samples, "sampling_rate": sampling_rate},
                                "text": transcript,
                                "id": key,
                                "dataset": "tedlium",
                            }
                            yield key, example

        else:
            audio_data = {}
            transcripts = defaultdict(list)
            for file in filepath:
                for path, f in file:
                    if path.endswith(".sph"):
                        # get the speaker id by dropping the ".sph" suffix
                        # (note: `str.strip(".sph")` would remove *characters*, not the suffix)
                        fn = path.split("/")[-1][: -len(".sph")]
                        # read the audio data from raw byte form and add key-value pair to dict
                        audio_data[fn] = sf.read(BytesIO(f.read()), dtype=np.int16)
                    elif path.endswith(".stm"):
                        for line in f:
                            if line:
                                line = line.decode("utf-8").strip()
                                fn, channel, speaker, start, end, label, transcript = line.split(" ", 6)
                                transcript = _maybe_trim_suffix(transcript)

                                # Error correction
                                transcript = transcript.lower()
                                if transcript in ignore_segments:
                                    continue
                                # delete the <unk> token from the text
                                transcript = transcript.replace("<unk>", "")
                                # replace spaced apostrophes with un-spaced (it 's -> it's)
                                for contraction in tedlium_contractions:
                                    transcript = transcript.replace(contraction, contraction[1:])
                                # JIWER compliance (for WER/CER calc.)
                                # remove multiple spaces
                                transcript = re.sub(r"\s\s+", " ", transcript)
                                # strip trailing spaces
                                transcript = transcript.strip()
                                if len(transcript) == 0:
                                    continue

                                audio_file = path.replace("stm", "sph")
                                key = "-".join([speaker, start, end, label])
                                # append metadata information to the dict of transcripts for the associated speaker
                                transcripts[fn].append(
                                    {
                                        "text": transcript,
                                        "file": audio_file,
                                        "id": key,
                                        "start": start,
                                        "end": end,
                                        "channel": channel,
                                        "fn": fn,
                                    }
                                )

                    if audio_data and audio_data.keys() == transcripts.keys():
                        for fn, speaker in transcripts.items():
                            for transcript in speaker:
                                segment, sampling_rate = audio_data[transcript["fn"]]
                                samples = _extract_audio_segment(
                                    segment,
                                    sampling_rate,
                                    float(transcript["start"]),
                                    float(transcript["end"]),
                                )
                                audio = {"path": transcript["file"], "array": samples,
                                         "sampling_rate": sampling_rate}
                                key = transcript["id"]
                                yield key, {
                                    "audio": audio,
                                    "text": transcript["text"],
                                    "dataset": "tedlium",
                                    "id": transcript["id"],
                                }
                        audio_data = {}
                        transcripts = defaultdict(list)

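    # Illustrative example (assumption) of the contraction fix above: each entry of
    # `tedlium_contractions` such as " 's" is replaced by its un-spaced form, so
    #     "it 's great that we 're here"  ->  "it's great that we're here"
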
    def _gigaspeech_split_generators(self, dl_manager):
        splits_to_configs = {
            "train": _GIGASPEECH_CONFIGS_TO_ALL_CONFIGS[self.config.subconfig],
            "dev": ["dev"],
            "test": ["test"],
        }

        # 1. prepare sharded archives with audio files
        audio_archives_urls = defaultdict(list)
        for split, subsets in splits_to_configs.items():
            for subset in subsets:
                audio_archives_urls[split].extend(
                    [
                        _GIGASPEECH_AUDIO_ARCHIVE_URL.format(subset=subset, is_additional=_is_additional(subset),
                                                             archive_id=i)
                        for i in range(_GIGASPEECH_N_ARCHIVES[subset])
                    ]
                )
        audio_archives_paths = dl_manager.download(audio_archives_urls)
        local_audio_archives_paths = dl_manager.extract(audio_archives_paths) if not dl_manager.is_streaming \
            else {}

        # 2. prepare sharded metadata csv files
        meta_urls = defaultdict(list)
        for split, subsets in splits_to_configs.items():
            for subset in subsets:
                meta_urls[split].extend(
                    [
                        _GIGASPEECH_META_URL.format(subset=subset, is_additional=_is_additional(subset), archive_id=i)
                        for i in range(_GIGASPEECH_N_ARCHIVES[subset])
                    ]
                )
        meta_paths = dl_manager.download_and_extract(meta_urls)

        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={
                    "audio_archives_iterators": [
                        dl_manager.iter_archive(archive_path) for archive_path in audio_archives_paths["train"]
                    ],
                    "local_audio_archives_paths": local_audio_archives_paths.get("train"),
                    "meta_paths": meta_paths["train"],
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.VALIDATION,
                gen_kwargs={
                    "audio_archives_iterators": [
                        dl_manager.iter_archive(archive_path) for archive_path in audio_archives_paths["dev"]
                    ],
                    "local_audio_archives_paths": local_audio_archives_paths.get("dev"),
                    "meta_paths": meta_paths["dev"],
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.TEST,
                gen_kwargs={
                    "audio_archives_iterators": [
                        dl_manager.iter_archive(archive_path) for archive_path in audio_archives_paths["test"]
                    ],
                    "local_audio_archives_paths": local_audio_archives_paths.get("test"),
                    "meta_paths": meta_paths["test"],
                },
            ),
        ]

    def _gigaspeech_generate_examples(self, audio_archives_iterators, local_audio_archives_paths, meta_paths):
        assert len(audio_archives_iterators) == len(meta_paths)
        if local_audio_archives_paths:
            assert len(audio_archives_iterators) == len(local_audio_archives_paths)

        for i, (meta_path, audio_archive_iterator) in enumerate(zip(meta_paths, audio_archives_iterators)):
            meta_dict = dict()
            with open(meta_path) as csvfile:
                meta_csv = csv.DictReader(csvfile)
                for line in meta_csv:
                    meta_dict[line["sid"]] = line

            for audio_path_in_archive, audio_file in audio_archive_iterator:
                # `audio_path_in_archive` is like "dev_chunks_0000/YOU1000000029_S0000095.wav"
                audio_filename = os.path.split(audio_path_in_archive)[1]
                audio_id = audio_filename.split(".wav")[0]
                audio_meta = meta_dict[audio_id]
                audio_meta["id"] = audio_meta.pop("sid")
                text = audio_meta.pop("text_tn")

                # Error correction
                text = text.lower()
                if text in ignore_segments:
                    continue
                for junk_token in gigaspeech_junk_tokens:
                    text = text.replace(junk_token, "")
                # convert spelled-out punctuation to symbolic form
                for punctuation, replacement in gigaspeech_punctuation.items():
                    text = text.replace(punctuation, replacement)
                # JIWER compliance (for WER/CER calc.)
                # remove multiple spaces
                text = re.sub(r"\s\s+", " ", text)
                # strip trailing spaces
                text = text.strip()
                if len(text) == 0:
                    continue

                audio_meta["text"] = text

                path = os.path.join(local_audio_archives_paths[i], audio_path_in_archive) if local_audio_archives_paths \
                    else audio_path_in_archive

                yield audio_id, {
                    "audio": {"path": path, "bytes": audio_file.read()},
                    "dataset": "gigaspeech",
                    **{feature: value for feature, value in audio_meta.items() if feature in self.info.features},
                }

    def _earnings_split_generators(self, dl_manager):
        meta_url = _EARNINGS_BASE_URL + "metadata.csv"
        meta_path = dl_manager.download_and_extract(meta_url)

        with open(meta_path, encoding="utf-8") as f:
            csvreader = csv.DictReader(f, delimiter=",")
            metadata, all_ids = {}, set()
            for row in csvreader:
                all_ids.add(row["source_id"])
                metadata[row["file"]] = row["sentence"]  # we need only the text in this benchmark

        train_ids = all_ids - _EARNINGS_DEV_IDS - _EARNINGS_TEST_IDS
        split_to_ids = {"train": train_ids, "dev": _EARNINGS_DEV_IDS, "test": _EARNINGS_TEST_IDS}

        dl_urls = {}
        for split, split_ids in split_to_ids.items():
            dl_urls[split] = [_EARNINGS_BASE_URL + f"data/{source_id}.tar.gz" for source_id in split_ids]
        archive_paths = dl_manager.download(dl_urls)

        local_extracted_archive_paths = (
            dl_manager.extract(archive_paths)
            if not dl_manager.is_streaming
            else {split: [None] * len(archive_paths[split]) for split in ["train", "dev", "test"]}
        )

        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={
                    "local_extracted_archive_paths": local_extracted_archive_paths["train"],
                    "archives": [dl_manager.iter_archive(path) for path in archive_paths["train"]],
                    "metadata": metadata,
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.VALIDATION,
                gen_kwargs={
                    "local_extracted_archive_paths": local_extracted_archive_paths["dev"],
                    "archives": [dl_manager.iter_archive(path) for path in archive_paths["dev"]],
                    "metadata": metadata,
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.TEST,
                gen_kwargs={
                    "local_extracted_archive_paths": local_extracted_archive_paths["test"],
                    "archives": [dl_manager.iter_archive(path) for path in archive_paths["test"]],
                    "metadata": metadata,
                },
            ),
        ]

    def _earnings_generate_examples(self, local_extracted_archive_paths, archives, metadata):
        for local_extracted_archive_path, archive in zip(local_extracted_archive_paths, archives):
            # Here we iterate over all the files within the TAR archive:
            for audio_filename, audio_file in archive:
                audio_filename = audio_filename.lstrip("./")
                # if an audio file exists locally (i.e. in default, non-streaming mode) set the full path to it
                # by joining the directory that the archive was extracted to and the audio filename
                path = (
                    os.path.join(local_extracted_archive_path, audio_filename)
                    if local_extracted_archive_path
                    else audio_filename
                )

                # Error correction
                text = metadata[audio_filename]
                if text.lower() in ignore_segments:
                    continue
                # Remove junk tokens
                for junk_token in earnings_junk_tokens:
                    text = text.replace(junk_token, "")
                # JIWER compliance (for WER/CER calc.)
                # remove multiple spaces
                text = re.sub(r"\s\s+", " ", text)
                # strip trailing spaces
                text = text.strip()
                if len(text) == 0:
                    continue

                yield audio_filename, {
                    "id": audio_filename,
                    "text": text,
                    "dataset": "earnings22",
                    "audio": {"path": path, "bytes": audio_file.read()},
                }


def _maybe_trim_suffix(transcript):
    # stm files for the TEDLIUM release 1 train split contain a key (enclosed in
    # parens) at the end.
    splits = transcript.rsplit(" ", 1)
    transcript = splits[0]
    if len(splits) > 1:
        suffix = splits[-1]
        if not suffix.startswith("("):
            transcript += " " + suffix
    return transcript


def _extract_audio_segment(segment, sampling_rate, start_sec, end_sec):
    """Extracts a segment of audio samples (as an ndarray) from the given segment."""
    # The dataset only contains mono audio.
    start_sample = int(start_sec * sampling_rate)
    end_sample = min(int(end_sec * sampling_rate), segment.shape[0])
    samples = segment[start_sample:end_sample]
    return samples


def _parse_gender(label_str):
    """Parse gender string from STM "<label>" field."""
    gender = re.split(",|_", label_str)[-1][:-1]
    # Fix inconsistencies in the data.
    if not gender:
        gender = -1  # Missing label.
    elif gender == "<NA":  # In TEDLIUM release 3 training data.
        gender = -1  # Missing label.
    elif gender == "F":
        gender = "female"
    elif gender == "M":
        gender = "male"
    return gender


def _is_additional(name):
    if name in {"s", "m", "l", "xl"}:
        return "_additional"
    return ""


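# Illustrative sanity checks for the helpers above (assumptions, not part of the
# original script):
#
#     _maybe_trim_suffix("but seriously (id1234)")  # -> "but seriously"
#     _maybe_trim_suffix("but seriously")           # -> "but seriously" (unchanged)
#     # keep 0.25 s - 0.50 s of a 1 s, 16 kHz mono signal -> 4000 samples:
#     _extract_audio_segment(np.zeros(16_000, dtype=np.int16), 16_000, 0.25, 0.5).shape  # -> (4000,)
#     _is_additional("m")    # -> "_additional" (these train subsets live under *_additional paths)
#     _is_additional("dev")  # -> ""
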
_AMI_TRAIN_SAMPLE_IDS = [
    "EN2001a", "EN2001b", "EN2001d", "EN2001e", "EN2003a", "EN2004a", "EN2005a", "EN2006a",
    "EN2006b", "EN2009b", "EN2009c", "EN2009d", "ES2002a", "ES2002b", "ES2002c", "ES2002d",
    "ES2003a", "ES2003b", "ES2003c", "ES2003d", "ES2005a", "ES2005b", "ES2005c", "ES2005d",
    "ES2006a", "ES2006b", "ES2006c", "ES2006d", "ES2007a", "ES2007b", "ES2007c", "ES2007d",
    "ES2008a", "ES2008b", "ES2008c", "ES2008d", "ES2009a", "ES2009b", "ES2009c", "ES2009d",
    "ES2010a", "ES2010b", "ES2010c", "ES2010d", "ES2012a", "ES2012b", "ES2012c", "ES2012d",
    "ES2013a", "ES2013b", "ES2013c", "ES2013d", "ES2014a", "ES2014b", "ES2014c", "ES2014d",
    "ES2015a", "ES2015b", "ES2015c", "ES2015d", "ES2016a", "ES2016b", "ES2016c", "ES2016d",
    "IB4005", "IN1001", "IN1002", "IN1005", "IN1007", "IN1008", "IN1009", "IN1012",
    "IN1013", "IN1014", "IN1016", "IS1000a", "IS1000b", "IS1000c", "IS1000d", "IS1001a",
    "IS1001b", "IS1001c", "IS1001d", "IS1002b", "IS1002c", "IS1002d", "IS1003a", "IS1003b",
    "IS1003c", "IS1003d", "IS1004a", "IS1004b", "IS1004c", "IS1004d", "IS1005a", "IS1005b",
    "IS1005c", "IS1006a", "IS1006b", "IS1006c", "IS1006d", "IS1007a", "IS1007b", "IS1007c",
    "IS1007d", "TS3005a", "TS3005b", "TS3005c", "TS3005d", "TS3006a", "TS3006b", "TS3006c",
    "TS3006d", "TS3007a", "TS3007b", "TS3007c", "TS3007d", "TS3008a", "TS3008b", "TS3008c",
    "TS3008d", "TS3009a", "TS3009b", "TS3009c", "TS3009d", "TS3010a", "TS3010b", "TS3010c",
    "TS3010d", "TS3011a", "TS3011b", "TS3011c", "TS3011d", "TS3012a", "TS3012b", "TS3012c",
    "TS3012d",
]

_AMI_VALIDATION_SAMPLE_IDS = [
    "ES2011a", "ES2011c", "IB4001", "IB4003", "IB4010", "IS1008a", "IS1008c", "TS3004a", "TS3004c",
    "ES2011b", "ES2011d", "IB4002", "IB4004", "IB4011", "IS1008b", "IS1008d", "TS3004b", "TS3004d",
]

_AMI_EVAL_SAMPLE_IDS = [
    "EN2002a", "EN2002b", "EN2002c", "EN2002d", "ES2004a", "ES2004b", "ES2004c", "ES2004d",
    "IS1009a", "IS1009b", "IS1009c", "IS1009d", "TS3003a", "TS3003b", "TS3003c", "TS3003d",
]

_AMI_SAMPLE_IDS = {
    "train": _AMI_TRAIN_SAMPLE_IDS,
    "dev": _AMI_VALIDATION_SAMPLE_IDS,
    "eval": _AMI_EVAL_SAMPLE_IDS,
}

_AMI_BASE_DATA_URL = "https://huggingface.co/datasets/speech-seq2seq/ami/resolve/main/"

_AMI_AUDIO_ARCHIVE_URL = _AMI_BASE_DATA_URL + "audio/ihm/{split}/{_id}.tar.gz"

_AMI_ANNOTATIONS_ARCHIVE_URL = _AMI_BASE_DATA_URL + "annotations/{split}/text"

_SPGISPEECH_BASE_URL = "https://huggingface.co/datasets/kensho/spgispeech/resolve/main/data/"

# no leading slash here: the base URL already ends with one
_SPGISPEECH_AUDIO_BASE_URL = _SPGISPEECH_BASE_URL + "audio"

_SPGISPEECH_SUBSET_TO_DIR = {
    "s": ["s"],
    "m": ["s", "m_additional"],
    "l": ["s", "m_additional", "l_additional"],
    "dev": ["dev"],
    "test": ["test"],
}

# the second number in each range() is the number of archives (shards) in the subset
_SPGISPEECH_AUDIO_ARCHIVES_NAMES = {
    "s": [f"s_part_{i}.tar.gz" for i in range(0, 6)],
    "m_additional": [f"m_part_{i}.tar.gz" for i in range(0, 21)],
    "l_additional": [f"l_part_{i}.tar.gz" for i in range(0, 103)],
    "dev": [f"dev_part_{i}.tar.gz" for i in range(0, 3)],
    "test": [f"test_part_{i}.tar.gz" for i in range(0, 3)],
}

_SPGISPEECH_META_BASE_URL = _SPGISPEECH_BASE_URL + "meta"

_SPGISPEECH_META_FILENAMES = {
    "s": "train_small.csv",
    "m": "train_medium.csv",
    "l": "train.csv",
    "dev": "dev.csv",
    "test": "test.csv",
}

_VOXPOPULI_BASE_DATA_DIR = "https://huggingface.co/datasets/polinaeterna/voxpopuli/resolve/main/data/"

_VOXPOPULI_N_SHARDS_FILE = _VOXPOPULI_BASE_DATA_DIR + "n_files.json"

_VOXPOPULI_AUDIO_ARCHIVE_PATH = _VOXPOPULI_BASE_DATA_DIR + "en/{split}/{split}_part_{n_shard}.tar.gz"

_VOXPOPULI_METADATA_PATH = _VOXPOPULI_BASE_DATA_DIR + "en/asr_{split}.tsv"

_LIBRISPEECH_DL_URL = "http://www.openslr.org/resources/12/"

_LIBRISPEECH_DL_URLS = {
    "dev.clean": _LIBRISPEECH_DL_URL + "dev-clean.tar.gz",
    "dev.other": _LIBRISPEECH_DL_URL + "dev-other.tar.gz",
    "test.clean": _LIBRISPEECH_DL_URL + "test-clean.tar.gz",
    "test.other": _LIBRISPEECH_DL_URL + "test-other.tar.gz",
    "train.clean.100": _LIBRISPEECH_DL_URL + "train-clean-100.tar.gz",
    "train.clean.360": _LIBRISPEECH_DL_URL + "train-clean-360.tar.gz",
    "train.other.500": _LIBRISPEECH_DL_URL + "train-other-500.tar.gz",
}

_COMMON_VOICE_API_URL = "https://commonvoice.mozilla.org/api/v1"

_TEDLIUM_BASE_URL = "https://huggingface.co/datasets/LIUM/tedlium/resolve/main/TEDLIUM_release3/legacy/"

_TEDLIUM_URLS = {
    "train": [_TEDLIUM_BASE_URL + "train_1.tar.gz", _TEDLIUM_BASE_URL + "train_2.tar.gz"],
    "dev": [_TEDLIUM_BASE_URL + "dev.tar.gz"],
    "test": [_TEDLIUM_BASE_URL + "test.tar.gz"],
}

_GIGASPEECH_BASE_DATA_URL = "https://huggingface.co/datasets/speechcolab/gigaspeech/resolve/main/data/"

_GIGASPEECH_AUDIO_ARCHIVE_URL = _GIGASPEECH_BASE_DATA_URL + "audio/{subset}_files{is_additional}/{subset}_chunks_{archive_id:04}.tar.gz"

_GIGASPEECH_META_URL = _GIGASPEECH_BASE_DATA_URL + "metadata/{subset}_metadata{is_additional}/{subset}_chunks_{archive_id:04}_metadata.csv"

_GIGASPEECH_CONFIGS_TO_ALL_CONFIGS = {
    "xs": ["xs"],
    "s": ["xs", "s"],
    "m": ["xs", "s", "m"],
    "l": ["xs", "s", "m", "l"],
    "xl": ["xs", "s", "m", "l", "xl"],
}

_GIGASPEECH_N_ARCHIVES = {
    "xs": 1,
    "s": 23,
    "m": 69,
    "l": 136,
    "xl": 602,
    "dev": 1,
    "test": 3,
}

_EARNINGS_BASE_URL = "https://huggingface.co/datasets/anton-l/earnings22_baseline_5_gram/resolve/main/"

_EARNINGS_DEV_IDS = {
    "4420696",
    "4448760",
    "4461799",
    "4469836",
    "4473238",
    "4482110",
}
_EARNINGS_TEST_IDS = {
    "4432298",
    "4450488",
    "4470290",
    "4479741",
    "4483338",
    "4485244",
}


tedlium_contractions = [" 's", " 't", " 're", " 've", " 'm", " 'll", " 'd", " 'clock", " 'all"]
gigaspeech_punctuation = {" <comma>": ",", " <period>": ".", " <questionmark>": "?", " <exclamationpoint>": "!"}
gigaspeech_junk_tokens = ["<other>", "<sil>"]
swb_junk_tokens = ["[noise]", "[laughter]", "[silence]", "[vocalized-noise]", "<a_aside>", "<b_aside>", "<e_aside>",
                   "[laughter-", "_1", "[laugh]", "[sigh]", "[cough]", "[mn]", "[breath]", "[lipsmack]",
                   "[sneeze]", "[skip]", "[pause]", "(%hesitation)", "(%HESITATION)"]
swb_punctuations = ["{", "}", "[", "]-", "]", "((", "))", "(", ")", "."]
swb_fillers = r"\b(uh|uhm|um|hmm|mm|mhm|mmm)\b"
earnings_junk_tokens = ["<noise>", "<crosstalk>", "<affirmative>", "<inaudible>", "inaudible", "<laugh>", "<silence>"]
ignore_segments = ["ignore_time_segment_in_scoring", "<noise>", "<music>", "[noise]", "[laughter]", "[silence]",
                   "[vocalized-noise]", "<crosstalk>", "<affirmative>", "<inaudible>", "<laugh>", ""]
ignore_segments = ignore_segments + gigaspeech_junk_tokens + swb_junk_tokens + earnings_junk_tokens
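
# Illustrative end-to-end example of the GigaSpeech text normalization defined by
# `gigaspeech_junk_tokens` and `gigaspeech_punctuation` above (assumption, not part
# of the original script):
#
#     text = "HE SAID <COMMA> YES <PERIOD> <SIL>".lower()
#     for junk_token in gigaspeech_junk_tokens:
#         text = text.replace(junk_token, "")
#     for punctuation, replacement in gigaspeech_punctuation.items():
#         text = text.replace(punctuation, replacement)
#     re.sub(r"\s\s+", " ", text).strip()  # -> "he said, yes."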