Improve dataset card: Add metadata, paper/project links, results, and sample usage (#2)
Co-authored-by: Niels Rogge <nielsr@users.noreply.huggingface.co>
README.md
CHANGED
---
configs:
- config_name: default
  data_files:
  - split: train
    path:
    - data/recitation_0/train/*.parquet
    - data/recitation_1/train/*.parquet
    - data/recitation_2/train/*.parquet
    - data/recitation_3/train/*.parquet
    - data/recitation_5/train/*.parquet
    - data/recitation_6/train/*.parquet
    - data/recitation_7/train/*.parquet
  - split: validation
    path:
    - data/recitation_0/validation/*.parquet
    - data/recitation_1/validation/*.parquet
    - data/recitation_2/validation/*.parquet
    - data/recitation_3/validation/*.parquet
    - data/recitation_5/validation/*.parquet
    - data/recitation_6/validation/*.parquet
    - data/recitation_7/validation/*.parquet
  - split: test
    path:
    - data/recitation_8/train/*.parquet
    - data/recitation_8/validation/*.parquet
dataset_info:
  splits:
  - name: train
    num_examples: 54823
  - name: test
    num_examples: 8787
  - name: validation
    num_examples: 7175
  features:
  - dtype: string
    name: aya_name
  # ...
  - null
  - 1
    name: labels
language:
- ar
license: cc-by-nc-4.0
task_categories:
- automatic-speech-recognition
tags:
- quran
- arabic
- speech-segmentation
- audio-segmentation
- audio
---

# Automatic Pronunciation Error Detection and Correction of the Holy Quran's Learners Using Deep Learning

[Paper](https://huggingface.co/papers/2509.00094) | [Project Page](https://obadx.github.io/prepare-quran-dataset/) | [Code](https://github.com/obadx/recitations-segmenter)

## Introduction

This dataset was developed as part of the research presented in the paper "Automatic Pronunciation Error Detection and Correction of the Holy Quran's Learners Using Deep Learning". The work introduces a 98% automated pipeline for producing high-quality Quranic datasets, comprising over 850 hours of audio (~300K annotated utterances). The dataset supports a novel ASR-based approach to pronunciation error detection, using a custom Quran Phonetic Script (QPS) designed to encode Tajweed rules.

## Recitation Segmentations Dataset

This is a modified version of [this dataset](https://huggingface.co/datasets/obadx/recitation-segmentation) with the following modifications:
* Speed augmentation applied to the recitation utterances using [audiomentations](https://iver56.github.io/audiomentations/); the `speed` column records the applied speed factor, ranging from 0.8 to 1.5, on 40% of the dataset.
* Additional data augmentation with [audiomentations](https://iver56.github.io/audiomentations/) applied to 40% of the dataset to prepare it for training the recitation splitter (a sketch of this style of augmentation appears below).

The code for building this dataset is available on [GitHub](https://github.com/obadx/recitations-segmenter).
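
As a rough illustration of this style of augmentation (an assumption for illustration only, not the exact transform stack used to build the dataset), a 0.8x–1.5x speed perturbation applied to about 40% of utterances could be expressed with `audiomentations` like this:

```python
import numpy as np
from audiomentations import Compose, TimeStretch

# Hypothetical sketch: perturb playback speed between 0.8x and 1.5x,
# applied with probability 0.4 (mirroring the "40% of the dataset" figure).
augment = Compose([
    TimeStretch(min_rate=0.8, max_rate=1.5, leave_length_unchanged=False, p=0.4),
])

# Placeholder one-second 16 kHz mono waveform standing in for a recitation.
samples = np.random.uniform(low=-0.2, high=0.2, size=16000).astype(np.float32)
augmented = augment(samples=samples, sample_rate=16000)
print(len(samples), '->', len(augmented))
```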

## Results

The model trained with this dataset achieved the following results on an unseen test set:

| Metric    | Value  |
|-----------|--------|
| Accuracy  | 0.9958 |
| F1        | 0.9964 |
| Loss      | 0.0132 |
| Precision | 0.9976 |
| Recall    | 0.9951 |
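
These are standard frame-level classification metrics. As a toy illustration (not the paper's evaluation script, and assuming binary speech/silence frame labels), they can be computed with `scikit-learn`:

```python
import numpy as np
from sklearn.metrics import (accuracy_score, f1_score,
                             precision_score, recall_score)

# Toy frame-level labels: 1 = speech frame, 0 = silence frame.
y_true = np.array([0, 1, 1, 1, 0, 0, 1, 1])
y_pred = np.array([0, 1, 1, 0, 0, 0, 1, 1])

print('Accuracy :', accuracy_score(y_true, y_pred))
print('Precision:', precision_score(y_true, y_pred))
print('Recall   :', recall_score(y_true, y_pred))
print('F1       :', f1_score(y_true, y_pred))
```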

## Sample Usage

Below is a Python example demonstrating how to use the `recitations-segmenter` library (developed alongside this dataset) to segment Holy Quran recitations.

First, ensure you have the necessary Python packages and `ffmpeg`/`libsndfile` installed:

#### Linux

```bash
sudo apt-get update
sudo apt-get install -y ffmpeg libsndfile1 portaudio19-dev
```

#### Windows & macOS

You can create an `anaconda` environment and then install these two libraries:

```bash
conda create -n segment python=3.12
conda activate segment
conda install -c conda-forge ffmpeg libsndfile
```

Install the library using pip:

```bash
pip install recitations-segmenter
```

Then, you can run the following Python script:

```python
from pathlib import Path

from recitations_segmenter import segment_recitations, read_audio, clean_speech_intervals
from transformers import AutoFeatureExtractor, AutoModelForAudioFrameClassification
import torch

if __name__ == '__main__':
    device = torch.device('cuda')
    dtype = torch.bfloat16

    processor = AutoFeatureExtractor.from_pretrained(
        "obadx/recitation-segmenter-v2")
    model = AutoModelForAudioFrameClassification.from_pretrained(
        "obadx/recitation-segmenter-v2",
    )

    model.to(device, dtype=dtype)

    # Change this to the file paths of your Holy Quran recitations
    file_paths = [
        './assets/dussary_002282.mp3',
        './assets/hussary_053001.mp3',
    ]
    waves = [read_audio(p) for p in file_paths]

    # Extract speech intervals in samples according to a 16000 Hz sample rate
    sampled_outputs = segment_recitations(
        waves,
        model,
        processor,
        device=device,
        dtype=dtype,
        batch_size=8,
    )

    for out, path in zip(sampled_outputs, file_paths):
        # Clean the speech intervals by:
        # * merging short silence durations
        # * removing short speech durations
        # * adding padding to each speech interval
        # Raises:
        # * NoSpeechIntervals: if the wav is complete silence
        # * TooHighMinSpeechDruation: if `min_speech_duration` is too high,
        #   which results in deleting all speech intervals
        clean_out = clean_speech_intervals(
            out.speech_intervals,
            out.is_complete,
            min_silence_duration_ms=30,
            min_speech_duration_ms=30,
            pad_duration_ms=30,
            return_seconds=True,
        )

        print(f'Speech intervals of {Path(path).name}:')
        print(clean_out.clean_speech_intervals)
        print(f'Is recitation complete: {clean_out.is_complete}')
        print('-' * 40)
```
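
To load the dataset itself (rather than run the segmenter), the standard `datasets` API applies. A minimal sketch; the repository id below is a placeholder to replace with this dataset's actual Hub id:

```python
from datasets import load_dataset

# Placeholder: substitute this dataset card's repository id.
repo_id = '<user>/<this-dataset>'

# Streaming avoids downloading all of the audio up front.
ds = load_dataset(repo_id, split='train', streaming=True)

example = next(iter(ds))
print(example.keys())  # e.g. includes the `aya_name` and `speed` columns
```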

## License

This dataset is licensed under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-nc-sa/4.0/).