Update index.html

index.html (+38 −44)
@@ -80,19 +80,21 @@
       </p>
       <p><strong>Example of Mispronunciation:</strong></p>
       <ul>
-        <li><strong>Reference Sequence:</strong> <code
-        <li><strong>
+        <li><strong>Reference Phoneme Sequence:</strong> <code>< i n n a SS A f aa w a l m a r w a t a m i n $ a E a a < i r i l l a h i</code></li>
+        <li><strong>Model Phoneme Prediction:</strong> <code>< i n n a SS A f aa w a l m a r w a t a m i n s a E a a < i r u l l a h i</code></li>
         <li>
-          <strong>Annotated
-          <code
+          <strong>Annotated Phoneme Sequence:</strong>
+          <code>< i n n a SS A f aa w a l m a r w a m i n <span class="highlight">s</span> a E a a < i <span class="highlight">r u</span> l l a h i</code>
         </li>
       </ul>
       <p>
         In this case, the phoneme <code>$</code> was mispronounced as <code>s</code>, and <code>i</code> was mispronounced as <code>u</code>.
       </p>
+      <p>
+        The annotated phoneme sequence indicates that the phoneme <code>ta</code> was omitted, but the model failed to detect it.
+      </p>
 
-
-      <h2>Research Directions</h2>
+      <h2>Potential Research Directions</h2>
       <ol>
         <li>
           <strong>Advanced Mispronunciation Detection Models</strong><br>
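The example in this hunk reduces to aligning a reference phoneme sequence against a predicted one and flagging the differences. A minimal sketch of that comparison, using Python's difflib on the two sequences quoted above (copied verbatim from the page); this is illustrative only, not the shared task's official scoring:

```python
# Align the reference and predicted phoneme sequences from the example
# above and report substitutions, omissions, and insertions.
from difflib import SequenceMatcher

reference = "< i n n a SS A f aa w a l m a r w a t a m i n $ a E a a < i r i l l a h i".split()
predicted = "< i n n a SS A f aa w a l m a r w a t a m i n s a E a a < i r u l l a h i".split()

matcher = SequenceMatcher(a=reference, b=predicted, autojunk=False)
for tag, i1, i2, j1, j2 in matcher.get_opcodes():
    if tag == "replace":   # substitution, e.g. $ -> s and i -> u
        print("substituted", reference[i1:i2], "->", predicted[j1:j2])
    elif tag == "delete":  # phoneme present in reference but missing in prediction
        print("omitted", reference[i1:i2])
    elif tag == "insert":  # phoneme added relative to the reference
        print("inserted", predicted[j1:j2])
```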
@@ -111,42 +113,9 @@
           Perform statistical analysis on the QuranMB dataset to identify prevalent errors (e.g., substituting similar phonemes, swapping vowels).
           These insights can drive targeted training and tailored feedback rules.
         </li>
-        <li>
-          <strong>Integration with Tajwīd Rules</strong><br>
-          Incorporate classical Tajwīd rules (e.g., madd, qalqalah, ikhfā’) into the detection pipeline so that feedback not only flags errors but also explains the correct recitation rule.
-        </li>
-        <li>
-          <strong>Adaptive Learning Paths</strong><br>
-          Design a system that adapts the sequence of verses based on each user’s error patterns, focusing on the next set of verses that emphasize their weak phonemes.
-        </li>
-      </ol>
-
-      <h2>References</h2>
-      <ul>
-        <li>
-          El Kheir, Y., et al.
-          "<a href="https://arxiv.org/abs/2211.00923" target="_blank">SpeechBlender: Speech Augmentation Framework for Mispronunciation Data Generation</a>,"
-          <em>arXiv preprint arXiv:2211.00923</em>, 2022.
-        </li>
-        <li>
-          Al Harere, A., & Al Jallad, K.
-          "<a href="https://arxiv.org/abs/2305.06429" target="_blank">Mispronunciation Detection of Basic Quranic Recitation Rules using Deep Learning</a>,"
-          <em>arXiv preprint arXiv:2305.06429</em>, 2023.
-        </li>
-        <li>
-          Aly, S. A., et al.
-          "<a href="https://arxiv.org/abs/2111.01136" target="_blank">ASMDD: Arabic Speech Mispronunciation Detection Dataset</a>,"
-          <em>arXiv preprint arXiv:2111.01136</em>, 2021.
-        </li>
-        <li>
-          Moustafa, A., & Aly, S. A.
-          "<a href="https://arxiv.org/abs/2111.06331" target="_blank">Towards an Efficient Voice Identification Using Wav2Vec2.0 and HuBERT Based on the Quran Reciters Dataset</a>,"
-          <em>arXiv preprint arXiv:2111.06331</em>, 2021.
-        </li>
-      </ul>
-
+      </ol>
 
-      <h2>Dataset Description</h2>
+      <h2>Training Dataset: Description</h2>
       <p>
         All data are hosted on Hugging Face. Two main splits are provided:
       </p>
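The statistical-analysis direction retained at the top of this hunk (identifying prevalent errors such as phoneme substitutions) can be prototyped by tallying confusion pairs across many aligned utterances. A hypothetical sketch reusing the difflib alignment from the previous snippet; the corpus below is invented placeholder data, not actual QuranMB entries:

```python
# Tally (reference -> predicted) phoneme substitutions across a corpus.
from collections import Counter
from difflib import SequenceMatcher

def count_substitutions(reference, predicted, counts):
    ops = SequenceMatcher(a=reference, b=predicted, autojunk=False).get_opcodes()
    for tag, i1, i2, j1, j2 in ops:
        # only equal-length replacements are unambiguous 1:1 substitutions
        if tag == "replace" and (i2 - i1) == (j2 - j1):
            counts.update(zip(reference[i1:i2], predicted[j1:j2]))

counts = Counter()
corpus = [  # placeholder (reference, predicted) utterance pairs
    ("m i n $ a E a a".split(), "m i n s a E a a".split()),
    ("< i r i l l a h i".split(), "< i r u l l a h i".split()),
]
for ref, pred in corpus:
    count_substitutions(ref, pred, counts)

print(counts.most_common())  # e.g. [(('$', 's'), 1), (('i', 'u'), 1)]
```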
@@ -180,7 +149,7 @@
       </p>
 
       <!-- Additional TTS Data -->
-      <h2>TTS Data (Optional Use)</h2>
+      <h2>Training Dataset: TTS Data (Optional Use)</h2>
       <p>
         We also provide a high-quality TTS corpus for auxiliary experiments (e.g., data augmentation, synthetic pronunciation error simulation). This TTS set can be loaded via:
       </p>
@@ -188,7 +157,7 @@
         <li><code>df_tts = load_dataset("IqraEval/Iqra_TTS")</code></li>
       </ul>
 
-      <h2>Test
+      <h2>Test Dataset: QuranMB_v2</h2>
       <p>
         To construct a reliable test set, we select 98 verses from the Qur’an, which are read aloud by 18 native Arabic speakers (14 females, 4 males), resulting in approximately 2 hours of recorded speech. The speakers were instructed to read the text in MSA at their normal tempo, disregarding Qur’anic tajweed rules, while deliberately producing the specified pronunciation errors. To ensure consistency in error production, we developed a custom recording tool that highlighted the modified text and displayed additional instructions specifying the type of error. Before recording, speakers were required to silently read each sentence to familiarize themselves with the intended errors before reading them aloud. After recording, three linguistic annotators verified and corrected the phoneme sequence and flagged all pronunciation errors for evaluation.
       </p>
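The only loader shown verbatim on the page is the TTS one. A minimal sketch of using it with the Hugging Face `datasets` library; note that the splits and column names of the returned object are not specified above, so inspect it rather than assuming them:

```python
# Load the IqraEval TTS corpus named on the page via Hugging Face `datasets`.
# Only the repository id IqraEval/Iqra_TTS appears in the diff; available
# splits and features are not documented there, so we print them.
from datasets import load_dataset

df_tts = load_dataset("IqraEval/Iqra_TTS")
print(df_tts)  # shows the splits and features actually provided
```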
@@ -263,12 +232,37 @@ ID,Labels
         <li>Submit a single CSV file (no archives). Filename must be <code>teamID_submission.csv</code>.</li>
       </ul>
       </p>
-
+
       <!-- Placeholder for Future Details -->
       <h2>Future Updates</h2>
       <p>
         Further details on <strong>evaluation criteria</strong> (exact scoring weights), <strong>submission templates</strong>, and any clarifications will be posted on the shared task website when test data are released (June 5, 2025). Stay tuned!
       </p>
+
+      <h2>References</h2>
+      <ul>
+        <li>
+          El Kheir, Y., et al.
+          "<a href="https://arxiv.org/abs/2211.00923" target="_blank">SpeechBlender: Speech Augmentation Framework for Mispronunciation Data Generation</a>,"
+          <em>arXiv preprint arXiv:2211.00923</em>, 2022.
+        </li>
+        <li>
+          Al Harere, A., & Al Jallad, K.
+          "<a href="https://arxiv.org/abs/2305.06429" target="_blank">Mispronunciation Detection of Basic Quranic Recitation Rules using Deep Learning</a>,"
+          <em>arXiv preprint arXiv:2305.06429</em>, 2023.
+        </li>
+        <li>
+          Aly, S. A., et al.
+          "<a href="https://arxiv.org/abs/2111.01136" target="_blank">ASMDD: Arabic Speech Mispronunciation Detection Dataset</a>,"
+          <em>arXiv preprint arXiv:2111.01136</em>, 2021.
+        </li>
+        <li>
+          Moustafa, A., & Aly, S. A.
+          "<a href="https://arxiv.org/abs/2111.06331" target="_blank">Towards an Efficient Voice Identification Using Wav2Vec2.0 and HuBERT Based on the Quran Reciters Dataset</a>,"
+          <em>arXiv preprint arXiv:2111.06331</em>, 2021.
+        </li>
+      </ul>
+
     </div>
   </body>
 </html>
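The submission rules kept by this last hunk call for a single CSV named <code>teamID_submission.csv</code>, and the hunk context shows an <code>ID,Labels</code> header. A hedged sketch of writing such a file; the utterance IDs and the assumption that <code>Labels</code> holds the predicted phoneme string are illustrative placeholders, not confirmed by the page:

```python
# Write a minimal teamID_submission.csv with the ID,Labels header shown
# in the hunk context. Rows are invented placeholders; Labels is assumed
# (not confirmed above) to be the predicted phoneme sequence per utterance.
import csv

predictions = {  # placeholder utterance IDs -> predicted phoneme strings
    "0000_0001": "< i n n a SS A f aa",
    "0000_0002": "m i n s a E a a",
}

with open("teamID_submission.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["ID", "Labels"])
    for utt_id, labels in predictions.items():
        writer.writerow([utt_id, labels])
```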