|
<!doctype html> |
|
<html> |
|
<head> |
|
<meta charset="utf-8" /> |
|
<meta name="viewport" content="width=device-width" /> |
|
<title>Iqra’Eval Shared Task</title> |
|
<link rel="stylesheet" href="style.css" /> |
|
</head> |
|
<body> |
|
<div class="card"> |
|
<h1>Iqra’Eval Shared Task</h1> |
|
|
|
<div style="text-align:center; margin: 20px 0;"> |
|
<img src="IqraEval.png" alt="" style="max-width:100%; height:auto;" /> |
|
</div> |
|
|
|
|
|
<h2>Overview</h2> |
|
<p> |
|
<strong>Iqra’Eval</strong> is a shared task aimed at advancing <strong>automatic assessment of Qur’anic recitation pronunciation</strong> by leveraging computational methods to detect and diagnose pronunciation errors. The focus on Qur’anic recitation provides a standardized and well-defined context for evaluating Modern Standard Arabic (MSA) pronunciation, where precise articulation is not only valued but essential for correctness according to established Tajweed rules. |
|
</p> |
|
<p> |
|
Participants will develop systems capable of: |
|
</p> |
|
<ul> |
|
<li>Detecting whether a segment of Qur’anic recitation contains pronunciation errors.</li> |
|
<li>Diagnosing the nature of the error (e.g., substitution, deletion, or insertion of phonemes).</li> |
|
</ul> |
|
|
|
|
|
<h2>Timeline</h2> |
|
<ul> |
|
<li><strong>June 1, 2025</strong>: Official announcement of the shared task</li> |
|
<li><strong>June 8, 2025</strong>: Release of training data, development set (QuranMB), phonetizer script, and baseline systems</li> |
|
<li><strong>July 24, 2025</strong>: Registration deadline and release of test data</li> |
|
<li><strong>July 27, 2025</strong>: End of evaluation cycle (test set submission closes)</li> |
|
<li><strong>July 30, 2025</strong>: Final results released</li> |
|
<li><strong>August 15, 2025</strong>: System description paper submissions due</li> |
|
<li><strong>August 22, 2025</strong>: Notification of acceptance</li> |
|
<li><strong>September 5, 2025</strong>: Camera-ready versions due</li> |
|
</ul> |
|
|
|
|
|
|
|
<h2>Task Description: Quranic Mispronunciation Detection System</h2> |
|
|
|
<p> |
|
The aim is to design a model that detects and provides detailed feedback on mispronunciations in Qur’anic recitations.
Users read aloud vowelized Qur’anic verses; the system predicts the phoneme sequence actually uttered by the speaker, which may contain mispronunciations.
|
Models are evaluated on the <strong>QuranMB.v2</strong> dataset, which contains human‐annotated mispronunciations. |
|
</p> |
|
|
|
<div class="centered"> |
|
<img src="task.png" alt="System Overview" style="max-width:100%; height:auto;" /> |
|
<p><em>Figure: Overview of the Mispronunciation Detection Workflow</em></p> |
|
</div> |
|
|
|
<h3>1. Read the Verse</h3> |
|
<p> |
|
The user is shown a <strong>Reference Verse</strong> in Arabic script along with its corresponding <strong>Reference Phoneme Sequence</strong>. |
|
</p> |
|
<p><strong>Example:</strong></p> |
|
<ul> |
|
<li><strong>Arabic:</strong> إِنَّ الصَّفَا وَالْمَرْوَةَ مِنْ شَعَائِرِ اللَّهِ</li> |
|
<li> |
|
<strong>Phoneme:</strong> |
|
<code>&lt; i n n a SS A f aa w a l m a r w a t a m i n $ a E a a &lt; i r i l l a h i</code>
|
</li> |
|
</ul> |
|
|
|
<h3>2. Save Recording</h3> |
|
<p> |
|
The user recites the verse aloud; the system captures and stores the audio waveform for subsequent analysis. |
|
</p> |
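<p>
For local experimentation, the recording step can be reproduced with any audio tool; the sketch below uses the <code>sounddevice</code> and <code>soundfile</code> packages (an illustrative choice, not a task requirement) to capture and store a 16&nbsp;kHz mono waveform.
</p>
<pre>
# Sketch: record a fixed-length utterance and save it as a WAV file.
import sounddevice as sd
import soundfile as sf

SAMPLE_RATE = 16_000     # Hz; assumed model input rate
DURATION    = 10         # seconds

audio = sd.rec(int(DURATION * SAMPLE_RATE), samplerate=SAMPLE_RATE,
               channels=1, dtype="float32")
sd.wait()                                      # block until recording finishes
sf.write("recitation.wav", audio, SAMPLE_RATE)
</pre>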
|
|
|
<h3>3. Mispronunciation Detection</h3> |
|
<p> |
|
The stored audio is fed into a <strong>Mispronunciation Detection Model</strong>. |
|
This model predicts the phoneme sequence uttered by the speaker, which may contain mispronunciations. |
|
</p> |
|
<p><strong>Example of Mispronunciation:</strong></p> |
|
<ul> |
|
<li><strong>Reference Sequence:</strong> <code>... m i n $ a E a a &lt; i r i l l a h i</code></li>
|
<li><strong>User’s Pronunciation:</strong> <code>... m i n s a E a a &lt; i r u l l a h i</code></li>
|
<li> |
|
<strong>Annotated Feedback:</strong> |
|
<code>... m i n <span class="highlight">s</span> a E a a &lt; i <span class="highlight">r u</span> l l a h i</code>
|
</li> |
|
</ul> |
|
<p> |
|
In this case, the phoneme <code>$</code> was mispronounced as <code>s</code>, and <code>i</code> was mispronounced as <code>u</code>. |
|
</p> |
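<p>
Annotated feedback of this kind can be derived automatically by aligning the reference and predicted phoneme sequences. Below is a minimal sketch using Python’s standard <code>difflib</code>; the alignment used by the official scoring scripts may differ.
</p>
<pre>
# Sketch: align reference vs. predicted phonemes and report edit operations.
from difflib import SequenceMatcher

ref  = "m i n $ a E a a &lt; i r i l l a h i".split()
pred = "m i n s a E a a &lt; i r u l l a h i".split()

for op, r1, r2, p1, p2 in SequenceMatcher(None, ref, pred).get_opcodes():
    if op == "replace":
        print("substitution:", ref[r1:r2], "->", pred[p1:p2])
    elif op == "delete":
        print("deletion:", ref[r1:r2])
    elif op == "insert":
        print("insertion:", pred[p1:p2])
# Prints: substitution: ['$'] -> ['s']  and  substitution: ['i'] -> ['u']
</pre>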
|
|
|
|
|
<h2>Research Directions</h2> |
|
<ol> |
|
<li> |
|
<strong>Advanced Mispronunciation Detection Models</strong><br> |
|
Apply state-of-the-art self-supervised models (e.g., |
|
<a href="https://arxiv.org/abs/2111.06331" target="_blank">Wav2Vec2.0</a>, HuBERT) |
|
pre-trained on Arabic speech. These models can be fine-tuned on Qur’anic recitations to improve phoneme-level accuracy (a minimal sketch follows this list).
|
</li> |
|
<li> |
|
<strong>Data Augmentation Strategies</strong><br> |
|
Create synthetic mispronunciation examples using pipelines like |
|
<a href="https://arxiv.org/abs/2211.00923" target="_blank">SpeechBlender</a>. |
|
Augmenting limited Arabic/Quranic speech data helps mitigate data scarcity and improves model robustness. |
|
</li> |
|
<li> |
|
<strong>Analysis of Common Mispronunciation Patterns</strong><br> |
|
Perform statistical analysis on the QuranMB dataset to identify prevalent errors (e.g., substituting similar phonemes, swapping vowels). |
|
These insights can drive targeted training and tailored feedback rules. |
|
</li> |
|
<li> |
|
<strong>Integration with Tajwīd Rules</strong><br> |
|
Incorporate classical Tajwīd rules (e.g., madd, qalqalah, ikhfāʾ) into the detection pipeline so that feedback not only flags errors but also explains the correct recitation rule.
|
</li> |
|
<li> |
|
<strong>Adaptive Learning Paths</strong><br> |
|
Design a system that adapts the sequence of verses based on each user’s error patterns—focusing on the next set of verses that emphasize their weak phonemes. |
|
</li> |
|
</ol> |
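<p>
As a starting point for the first direction above, the sketch below attaches a CTC phoneme head to a pretrained wav2vec 2.0 encoder and greedy-decodes a waveform into phoneme symbols. The checkpoint name, the (abbreviated) phoneme inventory, and the 16&nbsp;kHz assumption are illustrative choices rather than part of the task specification, and the newly initialized head must be fine-tuned on the training data before it produces meaningful output.
</p>
<pre>
# Sketch: CTC phoneme recognizer on top of a pretrained wav2vec 2.0 encoder.
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2FeatureExtractor

# Illustrative subset of the task's phoneme inventory; index 0 doubles as the CTC blank.
PHONEMES = ["&lt;pad&gt;"] + "&lt; i u a aa n m r l t s $ SS A E f w h b d k y".split()

model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-xls-r-300m",   # any Arabic-capable encoder can be substituted
    vocab_size=len(PHONEMES),         # new CTC head sized to the phoneme inventory
    ctc_loss_reduction="mean",
    pad_token_id=0,
)
extractor = Wav2Vec2FeatureExtractor(feature_size=1, sampling_rate=16_000,
                                     padding_value=0.0, do_normalize=True)

def predict_phonemes(waveform):
    """Greedy CTC decoding of a 16 kHz mono waveform into phoneme symbols."""
    inputs = extractor(waveform, sampling_rate=16_000, return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values).logits             # (1, frames, vocab)
    ids = torch.unique_consecutive(logits.argmax(dim=-1)[0]).tolist()
    return [PHONEMES[i] for i in ids if i != 0]                # drop CTC blanks
</pre>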
|
|
|
<h2>References</h2> |
|
<ul> |
|
<li> |
|
El Kheir, Y., et al. |
|
"<a href="https://arxiv.org/abs/2211.00923" target="_blank">SpeechBlender: Speech Augmentation Framework for Mispronunciation Data Generation</a>," |
|
<em>arXiv preprint arXiv:2211.00923</em>, 2022. |
|
</li> |
|
<li> |
|
Al Harere, A., & Al Jallad, K. |
|
"<a href="https://arxiv.org/abs/2305.06429" target="_blank">Mispronunciation Detection of Basic Quranic Recitation Rules using Deep Learning</a>," |
|
<em>arXiv preprint arXiv:2305.06429</em>, 2023. |
|
</li> |
|
<li> |
|
Aly, S. A., et al. |
|
"<a href="https://arxiv.org/abs/2111.01136" target="_blank">ASMDD: Arabic Speech Mispronunciation Detection Dataset</a>," |
|
<em>arXiv preprint arXiv:2111.01136</em>, 2021. |
|
</li> |
|
<li> |
|
Moustafa, A., & Aly, S. A. |
|
"<a href="https://arxiv.org/abs/2111.06331" target="_blank">Towards an Efficient Voice Identification Using Wav2Vec2.0 and HuBERT Based on the Quran Reciters Dataset</a>," |
|
<em>arXiv preprint arXiv:2111.06331</em>, 2021. |
|
</li> |
|
</ul> |
|
|
|
|
|
<h2>Dataset Description</h2> |
|
<p> |
|
All data are hosted on Hugging Face. Two main splits are provided: |
|
</p> |
|
<ul> |
|
<li> |
|
<strong>Training set:</strong> 79 hours of Modern Standard Arabic (MSA) speech, augmented with multiple Qur’anic recitations. |
|
<br /> |
|
<code>df = load_dataset("IqraEval/Iqra_train", split="train")</code> |
|
</li> |
|
<li> |
|
<strong>Development set:</strong> 3.4 hours reserved for tuning and validation. |
|
<br /> |
|
<code>df = load_dataset("IqraEval/Iqra_train", split="dev")</code> |
|
</li> |
|
</ul> |
|
<p> |
|
<strong>Column Definitions:</strong> |
|
</p> |
|
<ul> |
|
<li><code>audio</code>: Speech Array.</li> |
|
<li><code>sentence</code>: Original sentence text (may be partially diacritized or non-diacritized).</li> |
|
<li><code>index</code>: If from the Quran, the verse index (0–6265, including Basmalah); otherwise <code>-1</code>.</li> |
|
<li><code>tashkeel_sentence</code>: Fully diacritized sentence (auto-generated via a diacritization tool).</li> |
|
<li><code>phoneme</code>: Phoneme sequence corresponding to the diacritized sentence (Nawar Halabi phonetizer).</li> |
|
</ul> |
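<p>
A minimal loading and inspection sketch, assuming the Hugging Face <code>datasets</code> library (the 16&nbsp;kHz resampling below is an assumption, not a stated property of the corpus):
</p>
<pre>
from datasets import load_dataset, Audio

train = load_dataset("IqraEval/Iqra_train", split="train")
train = train.cast_column("audio", Audio(sampling_rate=16_000))  # resample on access

sample = train[0]
waveform = sample["audio"]["array"]     # raw speech samples (numpy array)
print(sample["sentence"])               # original, possibly undiacritized text
print(sample["tashkeel_sentence"])      # fully diacritized text
print(sample["phoneme"])                # reference phoneme sequence
</pre>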
|
<p> |
|
<strong>Data Splits:</strong> |
|
<br /> |
|
• Training (train): 79 hours total<br /> |
|
• Development (dev): 3.4 hours total |
|
</p> |
|
|
|
|
|
<h2>TTS Data (Optional Use)</h2> |
|
<p> |
|
We also provide a high-quality TTS corpus for auxiliary experiments (e.g., data augmentation, synthetic pronunciation error simulation). This TTS set can be loaded via: |
|
</p> |
|
<ul> |
|
<li><code>df_tts = load_dataset("IqraEval/Iqra_TTS")</code></li> |
|
</ul> |
|
|
|
<h2>Test Data QuranMB</h2> |
|
<p> |
|
To construct a reliable test set, we selected 98 verses from the Qur’an, read aloud by 18 native Arabic speakers (14 female, 4 male), yielding approximately 2 hours of recorded speech. Speakers were instructed to read the text in MSA at their normal tempo, disregarding Qur’anic tajweed rules, while deliberately producing the specified pronunciation errors. To ensure consistency in error production, we developed a custom recording tool that highlights the modified text and displays instructions specifying the type of error. Before recording, speakers silently read each sentence to familiarize themselves with the intended errors. After recording, three linguistic annotators verified and corrected the phoneme sequences and flagged all pronunciation errors for evaluation.
|
</p> |
|
<ul> |
|
<li><code>df_test = load_dataset("IqraEval/Iqra_QuranMB_v2")</code></li> |
|
</ul> |
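<p>
To illustrate how the pieces fit together, the loop below runs a system over QuranMB.v2 and collects predictions keyed by sample ID. The split name, the ID column, and the <code>predict_phonemes</code> function (sketched under Research Directions) are assumptions; consult the dataset card and your own model code.
</p>
<pre>
from datasets import load_dataset, Audio

test = load_dataset("IqraEval/Iqra_QuranMB_v2", split="test")   # split name: assumption
test = test.cast_column("audio", Audio(sampling_rate=16_000))

predictions = {}
for sample in test:
    phonemes = predict_phonemes(sample["audio"]["array"])   # your system's decoder
    predictions[sample["id"]] = " ".join(phonemes)          # ID column name: assumption
</pre>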
|
|
|
|
|
<h2>Resources</h2> |
|
<ul> |
|
<li> |
|
<a href="https://huggingface.co/datasets/IqraEval/Iqra_train" target="_blank"> |
|
Training & Development Data on Hugging Face |
|
</a> |
|
</li> |
|
<li> |
|
<a href="https://huggingface.co/datasets/IqraEval/Iqra_TTS" target="_blank"> |
|
IqraEval TTS Data on Hugging Face |
|
</a> |
|
</li> |
|
<li> |
|
<a href="https://github.com/Iqra-Eval/interspeech_IqraEval" target="_blank"> |
|
Baseline systems & training scripts (GitHub) |
|
</a> |
|
</li> |
|
</ul> |
|
<p> |
|
<em> |
|
For detailed instructions on data access, phonetizer installation, and baseline usage, please refer to the GitHub README. |
|
</em> |
|
</p> |
|
|
|
<h2>Evaluation Criteria</h2> |
|
<p> |
|
Systems will be scored on their ability to detect and correctly classify phoneme-level errors: |
|
</p> |
|
<ul> |
|
<li><strong>Detection accuracy:</strong> Did the system spot that a phoneme-level error occurred in the segment?</li> |
|
<li><strong>Classification F1-score:</strong> F1-score for phoneme-level mispronunciation detection and classification.</li>
|
</ul> |
|
<p> |
|
<em>(Detailed evaluation weights and scripts will be made available on June 8, 2025.)</em>
|
</p> |
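<p>
Pending the official scripts, one plausible way to compute a phoneme-level detection F1 is sketched below, assuming the reference, human-annotated, and predicted sequences have already been aligned to equal length; the released metric may weight or align errors differently.
</p>
<pre>
def detection_f1(reference, annotated, predicted):
    """Phoneme-level mispronunciation detection F1 (illustrative definition)."""
    truth   = [r != a for r, a in zip(reference, annotated)]   # actual mispronunciations
    flagged = [r != p for r, p in zip(reference, predicted)]   # system detections
    tp = sum(t and f for t, f in zip(truth, flagged))
    fp = sum(not t and f for t, f in zip(truth, flagged))
    fn = sum(t and not f for t, f in zip(truth, flagged))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall    = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
</pre>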
|
|
|
|
|
<h2>Submission Details (Draft)</h2> |
|
<p> |
|
Participants are required to submit a single CSV file containing the predicted phoneme sequences for each audio sample (see the naming requirement below). The file must have exactly two columns:
|
</p> |
|
<ul> |
|
<li><strong>ID:</strong> Unique identifier of the audio sample.</li> |
|
<li><strong>Labels:</strong> The predicted phoneme sequence, with each phoneme separated by a single space.</li> |
|
</ul> |
|
<p> |
|
Below is a minimal example illustrating the required format: |
|
</p> |
|
<pre> |
|
ID,Labels |
|
0000_0001, i n n a m a a y a k h a l l a h a m i n ʕ i b a a d i h u l ʕ u l a m |
|
0000_0002, m a a n a n s a k h u m i n i ʕ a a y a t i n |
|
0000_0003, y u k h i k u m u n n u ʔ a u ʔ a m a n a t a n m m i n h u |
|
… |
|
</pre> |
|
<p> |
|
The first column (ID) should match exactly the audio filenames (without extension). The second column (Labels) is the predicted phoneme string. |
|
</p> |
|
<p>
<strong>Important:</strong>
</p>
<ul>
<li>Use UTF-8 encoding.</li>
<li>Do not include extra spaces at the start or end of each line.</li>
<li>Submit a single CSV file (no archives). Filename must be <code>teamID_submission.csv</code>.</li>
</ul>
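<p>
A minimal sketch for writing the file, assuming predictions are held in a Python dict mapping sample IDs to space-separated phoneme strings (the dict contents below are placeholders):
</p>
<pre>
# Write the two-column submission file in UTF-8 with no extra whitespace.
predictions = {
    "0000_0001": "i n n a m a a ...",   # placeholder phoneme strings
    "0000_0002": "m a a n a n s a ...",
}

with open("teamID_submission.csv", "w", encoding="utf-8") as f:
    f.write("ID,Labels\n")
    for sample_id, phonemes in predictions.items():
        f.write(f"{sample_id},{phonemes.strip()}\n")
</pre>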
|
|
|
|
|
<h2>Future Updates</h2> |
|
<p> |
|
Further details on <strong>evaluation criteria</strong> (exact scoring weights), <strong>submission templates</strong>, and any clarifications will be posted on the shared task website when test data are released (July 24, 2025). Stay tuned!
|
</p> |
|
</div> |
|
</body> |
|
</html> |
|
|
|
|