Update index.html

index.html CHANGED (+242, -327)

@@ -1,336 +1,251 @@
  <!doctype html>
- <html>
  <head>
-
-
-
-
  </head>
  <body>
-
-
-
-
-
-
-
- <!-- Overview Section -->
- <h2>Overview</h2>
- <p>
-   <strong>Iqra'Eval</strong> is a shared task aimed at advancing <strong>automatic assessment of Qur’anic recitation pronunciation</strong> by leveraging computational methods to detect and diagnose pronunciation errors. The focus on Qur’anic recitation provides a standardized and well-defined context for evaluating Modern Standard Arabic (MSA) pronunciation.
- </p>
- <p>
-   Participants will develop systems capable of Detecting Mispronunciations (e.g., substitution, deletion, or insertion of phonemes).
- </p>
-
- <!-- Timeline Section -->
- <h2>Timeline</h2>
- <ul>
-   <li><strong>June 1, 2025</strong>: Official announcement of the shared task</li>
-   <li><strong>June 10, 2025</strong>: Release of training data, development set (QuranMB), phonetizer script, and baseline systems</li>
-   <li><strong>July 24, 2025</strong>: Registration deadline and release of test data</li>
-   <li><strong>July 27, 2025</strong>: End of evaluation cycle (test set submission closes)</li>
-   <li><strong>July 30, 2025</strong>: Final results released</li>
-   <li><strong>August 15, 2025</strong>: System description paper submissions due</li>
-   <li><strong>August 22, 2025</strong>: Notification of acceptance</li>
-   <li><strong>September 5, 2025</strong>: Camera-ready versions due</li>
- </ul>
-
- <!-- Task Description -->
-
- <h2>Task Description: Quranic Mispronunciation Detection System</h2>
-
- <p>
-   The aim is to design a model to detect and provide detailed feedback on mispronunciations in Quranic recitations.
-   Users read aloud vowelized Quranic verses; This model predicts the phoneme sequence uttered by the speaker, which may contain mispronunciations.
-   Models are evaluated on the <strong>QuranMB.v2</strong> dataset, which contains human‐annotated mispronunciations.
- </p>
-
- <div class="centered">
-   <img src="task.png" alt="System Overview" style="max-width:100%; height:auto;" />
-   <p><em>Figure: Overview of the Mispronunciation Detection Workflow</em></p>
- </div>
-
- <h3>1. Read the Verse</h3>
- <p>
-   The user is shown a <strong>Reference Verse</strong> (What should have been said) in Arabic script along with its corresponding <strong>Reference Phoneme Sequence</strong>.
- </p>
- <p><strong>Example:</strong></p>
- <ul>
-   <li><strong>Arabic:</strong> إِنَّ الصَّفَا وَالْمَرْوَةَ مِنْ شَعَائِرِ اللَّهِ</li>
-   <li>
-     <strong>Phoneme:</strong>
-     <code>< i n n a SS A f aa w a l m a r w a t a m i n $ a E a a < i r i l l a h i</code>
-   </li>
- </ul>
-
- <h3>2. Save Recording</h3>
- <p>
-   The user recites the verse aloud; the system captures and stores the audio waveform for subsequent analysis.
- </p>
-
- <h3>3. Mispronunciation Detection</h3>
- <p>
-   The stored audio is fed into a <strong>Mispronunciation Detection Model</strong>.
-   This model predicts the phoneme sequence uttered by the speaker, which may contain mispronunciations.
- </p>
- <p><strong>Example of Mispronunciation:</strong></p>
- <ul>
-   <li><strong>Reference Phoneme Sequence (What should have been said):</strong> <code>< i n n a SS A f aa w a l m a r w a t a m i n $ a E a a < i r i l l a h i</code></li>
-   <li><strong>Model Phoneme Prediction (What is predicted):</strong> <code>< i n n a SS A f aa w a l m a r w a t a m i n s a E a a < i r u l l a h i</code></li>
-   <li>
-     <strong>Annotated Phoneme Sequence (What is said):</strong>
-     <code>< i n n a SS A f aa w a l m a r w a m i n <span class="highlight">s</span> a E a a < i <span class="highlight">r u</span> l l a h i</code>
-   </li>
- </ul>
- <p>
-   In this case, the phoneme <code>$</code> was mispronounced as <code>s</code>, and <code>i</code> was mispronounced as <code>u</code>.
- </p>
- <p>
-   The annotated phoneme sequence indicates that the phoneme <code>ta</code> was omitted, but the model failed to detect it.
- </p>
-
-
-
- <h2>Training Dataset: Description</h2>
- <p>
-   All data are hosted on Hugging Face. Two main splits are provided:
- </p>
- <ul>
-   <li>
-     <strong>Training set:</strong> 79 hours of Modern Standard Arabic (MSA) speech, augmented with multiple Qur’anic recitations.
-     <br />
-     <code>df = load_dataset("IqraEval/Iqra_train", split="train")</code>
-   </li>
-   <li>
-     <strong>Development set:</strong> 3.4 hours reserved for tuning and validation.
-     <br />
-     <code>df = load_dataset("IqraEval/Iqra_train", split="dev")</code>
-   </li>
- </ul>
- <p>
-   <strong>Column Definitions:</strong>
- </p>
- <ul>
-   <li><code>audio</code>: Speech Array.</li>
-   <li><code>sentence</code>: Original sentence text (may be partially diacritized or non-diacritized).</li>
-   <li><code>index</code>: If from the Quran, the verse index (0–6265, including Basmalah); otherwise <code>-1</code>.</li>
-   <li><code>tashkeel_sentence</code>: Fully diacritized sentence (auto-generated via a diacritization tool).</li>
-   <li><code>phoneme</code>: Phoneme sequence corresponding to the diacritized sentence (Nawar Halabi phonetizer).</li>
- </ul>
- <p>
-   <strong>Data Splits:</strong>
-   <br />
-   • Training (train): 79 hours total<br />
-   • Development (dev): 3.4 hours total
- </p>
-
- <!-- Additional TTS Data -->
- <h2>Training Dataset: TTS Data (Optional Use)</h2>
- <p>
-   We also provide a high-quality TTS corpus for auxiliary experiments (e.g., data augmentation, synthetic pronunciation error simulation). This TTS set can be loaded via:
- </p>
- <ul>
-   <li><code>df_tts = load_dataset("IqraEval/Iqra_TTS")</code></li>
- </ul>
-
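One way to use such a corpus for synthetic pronunciation error simulation is to corrupt the reference phoneme sequence and keep the corrupted string as the "what is said" label. A minimal sketch in Python; the confusion table and error rate below are illustrative assumptions, not part of the task definition:

    import random

    # Illustrative confusion pairs for common substitutions (assumed, not official).
    CONFUSIONS = {"$": "s", "S": "s", "E": "<", "H": "h", "i": "u", "a": "i"}

    def corrupt_phonemes(phonemes: str, error_rate: float = 0.1, seed: int = 0) -> str:
        """Randomly substitute or delete phonemes to simulate mispronunciations."""
        rng = random.Random(seed)
        out = []
        for ph in phonemes.split():
            r = rng.random()
            if r < error_rate and ph in CONFUSIONS:
                out.append(CONFUSIONS[ph])      # substitution error
            elif r < error_rate * 1.5:
                continue                        # deletion error
            else:
                out.append(ph)                  # keep the phoneme unchanged
        return " ".join(out)

    ref = "< i n n a SS A f aa w a l m a r w a t a m i n $ a E a a < i r i l l a h i"
    print(corrupt_phonemes(ref, error_rate=0.2))

The corrupted string can then be paired with TTS audio synthesized from the corrupted sequence, or used as a pseudo-label for otherwise clean speech.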
- <h2>Test Dataset: QuranMB_v2</h2>
- <p>
-   To construct a reliable test set, we select 98 verses from the Qur’an, which are read aloud by 18 native Arabic speakers (14 females, 4 males), resulting in approximately 2 hours of recorded speech. The speakers were instructed to read the text in MSA at their normal tempo, disregarding Qur’anic tajweed rules, while deliberately producing the specified pronunciation errors. To ensure consistency in error production, we developed a custom recording tool that highlighted the modified text and displayed additional instructions specifying the type of error. Before recording, speakers were required to silently read each sentence to familiarize themselves with the intended errors before reading them aloud. After recording, three linguistic annotators verified and corrected the phoneme sequence and flagged all pronunciation errors for evaluation.
- </p>
- <ul>
-   <li><code>df_test = load_dataset("IqraEval/Iqra_QuranMB_v2")</code></li>
- </ul>
-
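The load_dataset calls quoted above can be combined into one short loading script. A minimal sketch with the Hugging Face datasets library; the field access assumes the standard Audio feature and the column names listed under Column Definitions:

    from datasets import load_dataset

    train = load_dataset("IqraEval/Iqra_train", split="train")   # 79 h training split
    dev = load_dataset("IqraEval/Iqra_train", split="dev")       # 3.4 h development split
    tts = load_dataset("IqraEval/Iqra_TTS")                      # optional TTS corpus
    test = load_dataset("IqraEval/Iqra_QuranMB_v2")              # QuranMB.v2 test set

    row = train[0]
    print(row["sentence"])            # original (possibly undiacritized) text
    print(row["tashkeel_sentence"])   # fully diacritized text
    print(row["phoneme"])             # phoneme sequence from the phonetizer
    print(row["audio"]["sampling_rate"], len(row["audio"]["array"]))  # raw waveform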
- <!-- Resources & Links -->
- <h2>Resources</h2>
- <ul>
-   <li>
-     <a href="https://github.com/Iqra-Eval/MSA_phonetiser" target="_blank">
-       Phonetiser script (GitHub)
-     </a>
-   </li>
-   <li>
-     <a href="https://huggingface.co/datasets/IqraEval/Iqra_train" target="_blank">
-       Training & Development Data on Hugging Face
-     </a>
-   </li>
-   <li>
-     <a href="https://huggingface.co/datasets/IqraEval/Iqra_TTS" target="_blank">
-       IqraEval TTS Data on Hugging Face
-     </a>
-   </li>
-   <li>
-     <a href="https://github.com/Iqra-Eval/interspeech_IqraEval" target="_blank">
-       Baseline systems & training scripts (GitHub)
-     </a>
-   </li>
- </ul>
- <p>
-   <em>
-     For detailed instructions on data access, phonetizer installation, and baseline usage, please refer to the <a href="https://github.com/Iqra-Eval" target="_blank">
-       GitHub
-     </a>.
-   </em>
- </p>
-
- <!-- Submission Details -->
- <h2>Submission Details (Draft)</h2>
- <p>
-   Participants are required to submit a CSV file named <code>submission.csv</code> containing the predicted phoneme sequences for each audio sample. The file must have exactly two columns:
- </p>
- <ul>
-   <li><strong>ID:</strong> Unique identifier of the audio sample.</li>
-   <li><strong>Labels:</strong> The predicted phoneme sequence, with each phoneme separated by a single space.</li>
- </ul>
- <p>
-   Below is a minimal example illustrating the required format:
- </p>
- <pre>
- ID,Labels
- 0000_0001, i n n a m a a y a k h a l l a h a m i n ʕ i b a a d i h u l ʕ u l a m
- 0000_0002, m a a n a n s a k h u m i n i ʕ a a y a t i n
- 0000_0003, y u k h i k u m u n n u ʔ a u ʔ a m a n a t a n m m i n h u
- …
- </pre>
- <p>
-   The first column (ID) should match exactly the audio filenames (without extension). The second column (Labels) is the predicted phoneme string.
- </p>
- <p>
-   <strong>Important:</strong>
-   <ul>
-     <li>Use UTF-8 encoding.</li>
-     <li>Do not include extra spaces at the start or end of each line.</li>
-     <li>Submit a single CSV file (no archives). Filename must be <code>teamID_submission.csv</code>.</li>
-   </ul>
- </p>
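A few lines of Python are enough to produce a file that satisfies these rules (UTF-8, two columns, no stray whitespace). In the sketch below, the predictions dictionary is a placeholder for whatever a participant's model actually outputs:

    import csv

    # Placeholder predictions: sample ID -> space-separated phoneme string.
    predictions = {
        "0000_0001": "i n n a m a a",
        "0000_0002": "m a a n a n s a k h",
    }

    # Two-column CSV, UTF-8, header row, no extra whitespace around fields.
    with open("teamID_submission.csv", "w", encoding="utf-8", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["ID", "Labels"])
        for sample_id in sorted(predictions):
            writer.writerow([sample_id, predictions[sample_id].strip()])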
-
- <h2>Evaluation Criteria</h2>
- <p>
-   IqraEval Leaderboard rankings will primarily be based on the <strong>phoneme-level F1-score</strong>.
- </p>
- <p>
-   In addition, we adopt a hierarchical evaluation structure, <a href="https://arxiv.org/pdf/2310.13974" target="_blank">MDD Overview</a>, that breaks down performance into detection and diagnostic phases.
- </p>
-
- <p>
-   <strong>Hierarchical Evaluation Structure:</strong>
-   The hierarchical mispronunciation detection process relies on three sequences:
-   <ul>
-     <li><em>What is said</em> (the <strong>annotated phoneme sequence</strong> from human annotation),</li>
-     <li><em>What is predicted</em> (the <strong>model’s phoneme output</strong>),</li>
-     <li><em>What should have been said</em> (the <strong>reference phoneme sequence</strong>).</li>
-   </ul>
-   By comparing these three sequences, we compute the following counts:
- </p>
- <ul>
-   <li><strong>True Acceptance (TA):</strong>
-     Number of phonemes that are annotated as correct and also recognized as correct by the model.
-   </li>
-   <li><strong>True Rejection (TR):</strong>
-     Number of phonemes that are annotated as mispronunciations and correctly predicted as mispronunciations.
-     (These labels are further used to measure diagnostic errors by comparing the prediction to the canonical reference.)
-   </li>
-   <li><strong>False Rejection (FR):</strong>
-     Number of phonemes that are annotated as correct but wrongly predicted as mispronunciations.
-   </li>
-   <li><strong>False Acceptance (FA):</strong>
-     Number of phonemes that are annotated as mispronunciations but misclassified as correct pronunciations.
-   </li>
- </ul>
- <p>
-   From these counts, we derive three rates:
-   <ul>
-     <li><strong>False Rejection Rate (FRR):</strong>
-       FRR = FR/(TA + FR)
-       (Proportion of correctly pronounced phonemes that were mistakenly flagged as errors.)
-     </li>
-     <li><strong>False Acceptance Rate (FAR):</strong>
-       FAR = FA/(FA + TR)
-       (Proportion of mispronounced phonemes that were mistakenly classified as correct.)
-     </li>
-     <li><strong>Diagnostic Error Rate (DER):</strong>
-       DER = DE/(CD + DE)
-       where DE is the number of misdiagnosed phonemes and CD is the number of correctly diagnosed ones.
-     </li>
-   </ul>
- </p>
- <p>
-   In addition to these hierarchical measures, we compute the standard <strong>Precision</strong>, <strong>Recall</strong>, and <strong>F-measure</strong> for mispronunciation detection:
-   <ul>
-     <li><strong>Precision:</strong>
-       Precision = TR/(TR + FR)
-       (Of all phonemes predicted as mispronounced, how many were actually mispronounced?)
-     </li>
-     <li><strong>Recall:</strong>
-       Recall = TR/(TR + FA)
-       (Of all truly mispronounced phonemes, how many did we correctly detect?)
-     </li>
-     <li><strong>F1-score:</strong>
-       F1-score = 2 * Precision * Recall / (Precision + Recall)
-     </li>
-   </ul>
- </p>
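Once the four counts are available, the rates and detection scores above reduce to a few lines of arithmetic. A minimal sketch; DER is left out because it additionally requires the correct/incorrect diagnosis split of the TR phonemes, and the counts in the example call are toy values:

    def detection_scores(ta: int, tr: int, fr: int, fa: int) -> dict:
        """FRR, FAR, Precision, Recall and F1 from the four counts defined above."""
        frr = fr / (ta + fr) if (ta + fr) else 0.0   # correct phonemes flagged as errors
        far = fa / (fa + tr) if (fa + tr) else 0.0   # mispronunciations accepted as correct
        precision = tr / (tr + fr) if (tr + fr) else 0.0
        recall = tr / (tr + fa) if (tr + fa) else 0.0
        f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
        return {"FRR": frr, "FAR": far, "Precision": precision, "Recall": recall, "F1": f1}

    # Toy counts, for illustration only.
    print(detection_scores(ta=900, tr=80, fr=30, fa=20))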
-
-
- <h2>Potential Research Directions</h2>
- <ol>
-   <li>
-     <strong>Advanced Mispronunciation Detection Models</strong><br>
-     Apply state-of-the-art self-supervised models (e.g., Wav2Vec2.0, HuBERT), using variants that are pre-trained/fine-tuned on Arabic speech. These models can then be fine-tuned on Quranic recitations to improve phoneme-level accuracy.
-   </li>
-   <li>
-     <strong>Data Augmentation Strategies</strong><br>
-     Create synthetic mispronunciation examples using pipelines like
-     <a href="https://arxiv.org/abs/2211.00923" target="_blank">SpeechBlender</a>.
-     Augmenting limited Arabic/Quranic speech data helps mitigate data scarcity and improves model robustness.
-   </li>
-   <li>
-     <strong>Analysis of Common Mispronunciation Patterns</strong><br>
-     Perform statistical analysis on the QuranMB dataset to identify prevalent errors (e.g., substituting similar phonemes, swapping vowels).
-     These insights can drive targeted training and tailored feedback rules.
-   </li>
- </ol>
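As a rough starting point for the first direction above, a publicly available multilingual phoneme-CTC checkpoint can be run on a recitation before any Arabic or Quranic fine-tuning. A sketch with the transformers library; the checkpoint name and the file recitation.wav are examples rather than task baselines, and the model's IPA-style output would still need mapping or fine-tuning to the task's phoneme set:

    import torch
    import torchaudio
    from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

    MODEL_ID = "facebook/wav2vec2-lv-60-espeak-cv-ft"   # multilingual phoneme-level CTC model

    processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
    model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID).eval()

    waveform, sr = torchaudio.load("recitation.wav")    # hypothetical recording
    if sr != 16_000:
        waveform = torchaudio.functional.resample(waveform, sr, 16_000)

    inputs = processor(waveform.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    print(processor.batch_decode(logits.argmax(dim=-1))[0])   # IPA-style phoneme string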
-
-
-
- <!-- Placeholder for Future Details -->
- <h2>Future Updates</h2>
- <p>
-   Further details on <strong>evaluation criteria</strong> (exact scoring weights), <strong>submission templates</strong>, and any clarifications will be posted on the shared task website when test data are released (June 5, 2025). Stay tuned!
- </p>
-
- <h2>References</h2>
- <ul>
-   <li>
-     El Kheir, Y., et al.
-     "<a href="https://arxiv.org/abs/2211.00923" target="_blank">SpeechBlender: Speech Augmentation Framework for Mispronunciation Data Generation</a>,"
-     <em>arXiv preprint arXiv:2211.00923</em>, 2022.
-   </li>
-   <li>
-     Al Harere, A., & Al Jallad, K.
-     "<a href="https://arxiv.org/abs/2305.06429" target="_blank">Mispronunciation Detection of Basic Quranic Recitation Rules using Deep Learning</a>,"
-     <em>arXiv preprint arXiv:2305.06429</em>, 2023.
-   </li>
-   <li>
-     Aly, S. A., et al.
-     "<a href="https://arxiv.org/abs/2111.01136" target="_blank">ASMDD: Arabic Speech Mispronunciation Detection Dataset</a>,"
-     <em>arXiv preprint arXiv:2111.01136</em>, 2021.
-   </li>
-   <li>
-     Moustafa, A., & Aly, S. A.
-     "<a href="https://arxiv.org/abs/2111.06331" target="_blank">Towards an Efficient Voice Identification Using Wav2Vec2.0 and HuBERT Based on the Quran Reciters Dataset</a>,"
-     <em>arXiv preprint arXiv:2111.06331</em>, 2021.
-   </li>
-   <li>
-     El Kheir, Y., et al.
-     "<a href="https://arxiv.org/pdf/2310.13974" target="_blank">Automatic Pronunciation Assessment - A Review</a>,"
-     <em>arXiv preprint arXiv:2310.13974</em>, 2023.
-   </li>
-
- </ul>
  </div>
  </body>
  </html>
  <!doctype html>
+ <html lang="en">
  <head>
+   <meta charset="utf-8" />
+   <meta name="viewport" content="width=device-width" />
+   <title>Iqra’Eval Shared Task</title>
+   <style>
+     /* Color Palette */
+     :root {
+       --navy-blue: #001f4d;
+       --coral: #ff6f61;
+       --light-gray: #f5f7fa;
+       --text-dark: #222;
+     }
+
+     body {
+       font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
+       background-color: var(--light-gray);
+       color: var(--text-dark);
+       margin: 20px;
+       line-height: 1.6;
+     }
+
+     h1, h2, h3 {
+       color: var(--navy-blue);
+       font-weight: 700;
+       margin-top: 1.2em;
+     }
+
+     h1 {
+       text-align: center;
+       font-size: 2.8rem;
+       margin-bottom: 0.3em;
+     }
+
+     h2 {
+       border-bottom: 3px solid var(--coral);
+       padding-bottom: 0.3em;
+     }
+
+     h3 {
+       color: var(--coral);
+       margin-top: 1em;
+     }
+
+     p {
+       max-width: 900px;
+       margin: 0.8em auto;
+     }
+
+     strong {
+       color: var(--navy-blue);
+     }
+
+     ul {
+       max-width: 900px;
+       margin: 0.5em auto 1.5em auto;
+       padding-left: 1.2em;
+     }
+
+     ul li {
+       margin: 0.4em 0;
+     }
+
+     code {
+       background-color: #eef4f8;
+       color: var(--navy-blue);
+       padding: 2px 6px;
+       border-radius: 4px;
+       font-family: Consolas, monospace;
+       font-size: 0.9em;
+     }
+
+     pre {
+       max-width: 900px;
+       background-color: #eef4f8;
+       color: var(--navy-blue);
+       padding: 1em;
+       border-radius: 8px;
+       overflow-x: auto;
+       font-family: Consolas, monospace;
+       font-size: 0.95em;
+       margin: 0.8em auto;
+     }
+
+     a {
+       color: var(--coral);
+       text-decoration: none;
+     }
+
+     a:hover {
+       text-decoration: underline;
+     }
+
+     .card {
+       max-width: 960px;
+       background: white;
+       margin: 0 auto 40px auto;
+       padding: 2em 2.5em;
+       box-shadow: 0 4px 14px rgba(0,0,0,0.1);
+       border-radius: 12px;
+     }
+
+     /* Centering images and captions */
+     div img {
+       display: block;
+       margin: 20px auto;
+       max-width: 100%;
+       height: auto;
+       border-radius: 8px;
+       box-shadow: 0 4px 8px rgba(0,31,77,0.15);
+     }
+
+     .centered p {
+       text-align: center;
+       font-style: italic;
+       color: var(--navy-blue);
+       margin-top: 0.4em;
+     }
+
+     .highlight {
+       color: var(--coral);
+       font-weight: 700;
+     }
+
+     /* Lists inside paragraphs */
+     p > ul {
+       margin-top: 0.3em;
+     }
+
+   </style>
  </head>
  <body>
+   <div class="card">
+     <h1>Iqra’Eval Shared Task</h1>
+
+     <div>
+       <img src="IqraEval.png" alt="IqraEval Logo" />
+     </div>
+
+     <!-- Overview Section -->
+     <h2>Overview</h2>
+     <p>
+       <strong>Iqra'Eval</strong> is a shared task aimed at advancing <strong>automatic assessment of Qur’anic recitation pronunciation</strong> by leveraging computational methods to detect and diagnose pronunciation errors. The focus on Qur’anic recitation provides a standardized and well-defined context for evaluating Modern Standard Arabic (MSA) pronunciation.
+     </p>
+     <p>
+       Participants will develop systems capable of detecting mispronunciations (e.g., substitution, deletion, or insertion of phonemes).
+     </p>
+
+     <!-- Timeline Section -->
+     <h2>Timeline</h2>
+     <ul>
+       <li><strong>June 1, 2025</strong>: Official announcement of the shared task</li>
+       <li><strong>June 10, 2025</strong>: Release of training data, development set (QuranMB), phonetizer script, and baseline systems</li>
+       <li><strong>July 24, 2025</strong>: Registration deadline and release of test data</li>
+       <li><strong>July 27, 2025</strong>: End of evaluation cycle (test set submission closes)</li>
+       <li><strong>July 30, 2025</strong>: Final results released</li>
+       <li><strong>August 15, 2025</strong>: System description paper submissions due</li>
+       <li><strong>August 22, 2025</strong>: Notification of acceptance</li>
+       <li><strong>September 5, 2025</strong>: Camera-ready versions due</li>
+     </ul>
+
+     <!-- Task Description -->
+     <h2>Task Description: Quranic Mispronunciation Detection System</h2>
+     <p>
+       The aim is to design a model to detect and provide detailed feedback on mispronunciations in Quranic recitations.
+       Users read aloud vowelized Quranic verses; this model predicts the phoneme sequence uttered by the speaker, which may contain mispronunciations.
+       Models are evaluated on the <strong>QuranMB.v2</strong> dataset, which contains human‐annotated mispronunciations.
+     </p>
+
+     <div class="centered">
+       <img src="task.png" alt="System Overview" />
+       <p>Figure: Overview of the Mispronunciation Detection Workflow</p>
  </div>
+
+     <h3>1. Read the Verse</h3>
+     <p>
+       The user is shown a <strong>Reference Verse</strong> (What should have been said) in Arabic script along with its corresponding <strong>Reference Phoneme Sequence</strong>.
+     </p>
+     <p><strong>Example:</strong></p>
+     <ul>
+       <li><strong>Arabic:</strong> إِنَّ الصَّفَا وَالْمَرْوَةَ مِنْ شَعَائِرِ اللَّهِ</li>
+       <li>
+         <strong>Phoneme:</strong>
+         <code>< i n n a SS A f aa w a l m a r w a t a m i n $ a E a a < i r i l l a h i</code>
+       </li>
+     </ul>
+
+     <h3>2. Save Recording</h3>
+     <p>
+       The user recites the verse aloud; the system captures and stores the audio waveform for subsequent analysis.
+     </p>
+
+     <h3>3. Mispronunciation Detection</h3>
+     <p>
+       The stored audio is fed into a <strong>Mispronunciation Detection Model</strong>.
+       This model predicts the phoneme sequence uttered by the speaker, which may contain mispronunciations.
+     </p>
+     <p><strong>Example of Mispronunciation:</strong></p>
+     <ul>
+       <li><strong>Reference Phoneme Sequence (What should have been said):</strong> <code>< i n n a SS A f aa w a l m a r w a t a m i n $ a E a a < i r i l l a h i</code></li>
+       <li><strong>Model Phoneme Prediction (What is predicted):</strong> <code>< i n n a SS A f aa w a l m a r w a t a m i n s a E a a < i r u l l a h i</code></li>
+       <li>
+         <strong>Annotated Phoneme Sequence (What is said):</strong>
+         <code>< i n n a SS A f aa w a l m a r w <span class="highlight">s</span> a E a a < i <span class="highlight">r u</span> l l a h i</code>
+       </li>
+     </ul>
+     <p>
+       In this case, the phoneme <code>$</code> was mispronounced as <code>s</code>, and <code>i</code> was mispronounced as <code>u</code>.
+     </p>
+     <p>
+       The annotated phoneme sequence indicates that the phoneme <code>ta</code> was omitted, but the model failed to detect it.
+     </p>
+
+     <h2>Training Dataset: Description</h2>
+     <p>
+       All data are hosted on Hugging Face. Two main splits are provided:
+     </p>
+     <ul>
+       <li>
+         <strong>Training set:</strong> 79 hours of Modern Standard Arabic (MSA) Quran recitations (5,167 audio files)
+       </li>
+       <li>
+         <strong>Evaluation set:</strong> QuranMB.v2 dataset with phoneme-level mispronunciation annotations, which includes:
+         <ul>
+           <li>QuranMB-Train: 9 hours (1,218 files) for development</li>
+           <li>QuranMB-Test: 8 hours (1,018 files) for evaluation</li>
+         </ul>
+       </li>
+     </ul>
+
+     <h2>Submission Guidelines</h2>
+     <p>
+       Participants should submit their predicted phoneme sequences on the test set by the deadline (July 27, 2025). Submissions will be automatically evaluated using the official scoring scripts.
+     </p>
+
+     <h2>Evaluation Metrics</h2>
+     <p>
+       Systems will be evaluated based on phoneme error rates (PER) computed over the test set, measuring accuracy in detecting and localizing mispronunciations.
+     </p>
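PER here is the usual edit-distance rate over phoneme tokens. A self-contained sketch of that computation; the official scoring script may differ in details such as normalization:

    def phoneme_error_rate(reference: str, hypothesis: str) -> float:
        """(substitutions + insertions + deletions) / number of reference phonemes."""
        ref, hyp = reference.split(), hypothesis.split()
        # Levenshtein distance over phoneme tokens via dynamic programming.
        d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            d[i][0] = i
        for j in range(len(hyp) + 1):
            d[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,         # deletion
                              d[i][j - 1] + 1,         # insertion
                              d[i - 1][j - 1] + cost)  # substitution or match
        return d[len(ref)][len(hyp)] / max(len(ref), 1)

    print(phoneme_error_rate("< i n n a SS A f aa", "< i n n a s A f aa"))  # 1/9 ≈ 0.111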
+
+     <h2>Contact and Support</h2>
+     <p>
+       For inquiries and support, reach out to the task coordinators at
+       <a href="mailto:support@iqraeval.org">support@iqraeval.org</a>.
+     </p>
+
+   </div>
  </body>
  </html>