<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width" />
<title>Iqra’Eval Shared Task</title>
<style>
:root {
--navy-blue: #001f4d;
--coral: #ff6f61;
--light-gray: #f5f7fa;
--text-dark: #222;
}
body {
font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
background-color: var(--light-gray);
color: var(--text-dark);
margin: 20px;
line-height: 1.6;
}
h1, h2, h3 {
color: var(--navy-blue);
font-weight: 700;
margin-top: 1.2em;
}
h1 {
text-align: center;
font-size: 2.8rem;
margin-bottom: 0.3em;
}
h2 {
border-bottom: 3px solid var(--coral);
padding-bottom: 0.3em;
}
h3 {
color: var(--coral);
margin-top: 1em;
}
p, ul, pre {
max-width: 900px;
margin: 0.8em auto;
}
ul { padding-left: 1.2em; }
ul li { margin: 0.4em 0; }
code {
background-color: #eef4f8;
color: var(--navy-blue);
padding: 2px 6px;
border-radius: 4px;
font-family: Consolas, monospace;
font-size: 0.9em;
}
pre {
background-color: #eef4f8;
padding: 1em;
border-radius: 8px;
overflow-x: auto;
font-size: 0.95em;
}
a {
color: var(--coral);
text-decoration: none;
}
a:hover { text-decoration: underline; }
.card {
max-width: 960px;
background: white;
margin: 0 auto 40px;
padding: 2em 2.5em;
box-shadow: 0 4px 14px rgba(0,0,0,0.1);
border-radius: 12px;
}
img {
display: block;
margin: 20px auto;
max-width: 100%;
height: auto;
border-radius: 8px;
box-shadow: 0 4px 8px rgba(0,31,77,0.15);
}
.centered p {
text-align: center;
font-style: italic;
color: var(--navy-blue);
margin-top: 0.4em;
}
.highlight {
color: var(--coral);
font-weight: 700;
}
/* nested lists in paragraphs */
p > ul { margin-top: 0.3em; }
</style>
</head>
<body>
<div class="card">
<h1>Iqra’Eval Shared Task</h1>
<img src="IqraEval.png" alt="IqraEval Logo" />
<h2>Overview</h2>
<p>
<strong>Iqra’Eval</strong> is a shared task aimed at advancing <strong>automatic assessment of Qur’anic recitation pronunciation</strong> by leveraging computational methods to detect and diagnose pronunciation errors. The focus on Qur’anic recitation provides a standardized and well-defined context for evaluating Modern Standard Arabic (MSA) pronunciation.
</p>
<p>
Participants will develop systems capable of detecting mispronunciations (e.g., substitution, deletion, or insertion of phonemes).
</p>
<h2>Timeline</h2>
<ul>
<li><strong>June 1, 2025</strong>: Official announcement</li>
<li><strong>June 10, 2025</strong>: Release of training data, dev set, phonetizer, baselines</li>
<li><strong>June 20, 2025</strong>: Leaderboard opens</li>
<li><strong>July 20, 2025</strong>: Registration deadline</li>
<li><strong>July 24, 2025</strong>: QuranMB test data release</li>
<li><strong>July 29, 2025</strong>: Test set submission closes</li>
<li><strong>July 30, 2025</strong>: Final results released</li>
<li><strong>August 15, 2025</strong>: System description papers due</li>
<li><strong>August 22, 2025</strong>: Notification of acceptance</li>
<li><strong>September 5, 2025</strong>: Camera-ready versions due</li>
</ul>
<h2>Task Description: Quranic Mispronunciation Detection System</h2>
<p>
Design a model to detect and provide detailed feedback on mispronunciations in Quranic recitations. Users read vowelized verses; the model predicts the spoken phoneme sequence and flags deviations. Evaluation is on the <strong>QuranMB.v2</strong> dataset with human‐annotated errors.
</p>
<div class="centered">
<img src="task.png" alt="System Overview" />
<p>Figure: Overview of the Mispronunciation Detection Workflow</p>
</div>
<h3>1. Read the Verse</h3>
<p>
The system displays a <strong>Reference Verse</strong> along with its <strong>Reference Phoneme Sequence</strong>.
</p>
<p><strong>Example:</strong></p>
<ul>
<li><strong>Arabic:</strong> إِنَّ الصَّفَا وَالْمَرْوَةَ مِنْ شَعَائِرِ اللَّهِ</li>
<li>
<strong>Phoneme:</strong>
<code>&lt; i n n a SS A f aa w a l m a r w a t a m i n $ a E a a &lt; i r i l l a h i</code>
</li>
</ul>
<h3>2. Save Recording</h3>
<p>
The user recites the verse; the system captures and stores the audio waveform.
</p>
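<p>
As an illustration only, a minimal capture sketch in Python, assuming the <code>sounddevice</code> and <code>soundfile</code> packages (the task does not prescribe a recording tool):
</p>
<pre>
import sounddevice as sd
import soundfile as sf

SR = 16000        # 16 kHz, the rate most speech models expect
DURATION = 10     # seconds to record (placeholder verse length)

# Record mono audio from the default microphone, then store it as WAV.
audio = sd.rec(int(DURATION * SR), samplerate=SR, channels=1)
sd.wait()  # block until the recording is finished
sf.write("recitation.wav", audio, SR)
</pre>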
<h3>3. Mispronunciation Detection</h3>
<p>
The model predicts the phoneme sequence that was actually spoken; deviations from the reference indicate mispronunciations.
</p>
<p><strong>Example of Mispronunciation:</strong></p>
<ul>
<li><strong>Reference:</strong> <code>&lt; i n n a SS A f aa w a l m a r w a t a m i n $ a E a a &lt; i r i l l a h i</code></li>
<li><strong>Predicted:</strong> <code>&lt; i n n a SS A f aa w a l m a r w a t a m i n s a E a a &lt; i r u l l a h i</code></li>
<li>
<strong>Annotated:</strong>
<code>&lt; i n n a SS A f aa w a l m a r w <span class="highlight">s</span> a E a a &lt; i <span class="highlight">r u</span> l l a h i</code>
</li>
</ul>
<p>
Here, the substitutions <code>$</code> → <code>s</code> and <code>i</code> → <code>u</code> were correctly detected, but the omission of <code>t a</code> went undetected.
</p>
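<p>
One simple way to surface such deviations is an edit-distance alignment of the two sequences. The sketch below uses Python's standard <code>difflib</code>; it illustrates the idea and is not the official scoring procedure:
</p>
<pre>
import difflib

reference = "&lt; i n n a SS A f aa w a l m a r w a t a m i n $ a E a a &lt; i r i l l a h i".split()
predicted = "&lt; i n n a SS A f aa w a l m a r w a t a m i n s a E a a &lt; i r u l l a h i".split()

# Align the phoneme sequences and report every non-matching span.
matcher = difflib.SequenceMatcher(a=reference, b=predicted, autojunk=False)
for tag, i1, i2, j1, j2 in matcher.get_opcodes():
    if tag != "equal":
        print(tag, reference[i1:i2], "->", predicted[j1:j2])
# prints: replace ['$'] -> ['s'] and replace ['i'] -> ['u']
</pre>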
<h2>Training Dataset: Description</h2>
<p>
Hosted on Hugging Face:
</p>
<ul>
<li>
<strong>Training:</strong> 79 hours of MSA speech augmented with Qur’anic recitations
<code>load_dataset("IqraEval/Iqra_train", split="train")</code>
</li>
<li>
<strong>Development:</strong> 3.4 hours as dev set
<code>load_dataset("IqraEval/Iqra_train", split="dev")</code>
</li>
</ul>
<p><strong>Columns:</strong></p>
<ul>
<li><code>audio</code>: waveform</li>
<li><code>sentence</code>: original verse text</li>
<li><code>index</code>: verse ID</li>
<li><code>tashkeel_sentence</code>: fully diacritized verse text</li>
<li><code>phoneme</code>: phoneme sequence (produced by the phonetiser)</li>
</ul>
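<p>
A minimal loading sketch, assuming the Hugging Face <code>datasets</code> library and the column names listed above:
</p>
<pre>
from datasets import load_dataset

train = load_dataset("IqraEval/Iqra_train", split="train")
dev = load_dataset("IqraEval/Iqra_train", split="dev")

sample = train[0]
print(sample["sentence"])           # original verse text
print(sample["tashkeel_sentence"])  # fully diacritized verse
print(sample["phoneme"])            # reference phoneme sequence
print(sample["audio"]["sampling_rate"])  # decoded audio: array + rate
</pre>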
<h2>Training Dataset: TTS Data (Optional)</h2>
<p>
Auxiliary high-quality TTS corpus for augmentation:
<code>load_dataset("IqraEval/Iqra_TTS")</code>
</p>
<h2>Test Dataset: QuranMB.v2</h2>
<p>
98 verses read by 18 speakers (≈ 2 hours), with deliberately produced errors and human annotations.
<code>load_dataset("IqraEval/Iqra_QuranMB_v2")</code>
</p>
<h2>Resources & Links</h2>
<ul>
<li><a href="https://github.com/Iqra-Eval/MSA_phonetiser" target="_blank">Phonetiser script (GitHub)</a></li>
<li><a href="https://huggingface.co/datasets/IqraEval/Iqra_train" target="_blank">Training & Dev Data (Hugging Face)</a></li>
<li><a href="https://huggingface.co/datasets/IqraEval/Iqra_TTS" target="_blank">TTS Data (Hugging Face)</a></li>
<li><a href="https://github.com/Iqra-Eval/interspeech_IqraEval" target="_blank">Baseline Systems & Scripts (GitHub)</a></li>
</ul>
<h2>Submission Details (Draft)</h2>
<p>
Submit a UTF-8 CSV named <code>teamID_submission.csv</code> with two columns:
</p>
<ul>
<li><strong>ID:</strong> audio filename (no extension)</li>
<li><strong>Labels:</strong> predicted phoneme sequence (space-separated)</li>
</ul>
<pre>
ID,Labels
0000_0001, i n n a m a a y a …
0000_0002, m a a n a n s a …
</pre>
<p>
<strong>Note:</strong> submit a single CSV file with no extra spaces; archives are not accepted.
</p>
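<p>
A minimal sketch for writing the submission file with Python's standard <code>csv</code> module; the <code>predictions</code> dict here is hypothetical:
</p>
<pre>
import csv

# Hypothetical model output: audio ID (no extension) -> phoneme tokens.
predictions = {
    "0000_0001": ["i", "n", "n", "a", "m", "a", "a"],
    "0000_0002": ["m", "a", "a", "n", "a", "n", "s", "a"],
}

with open("teamID_submission.csv", "w", encoding="utf-8", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["ID", "Labels"])
    for audio_id, phonemes in predictions.items():
        writer.writerow([audio_id, " ".join(phonemes)])
</pre>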
<h2>Evaluation Criteria</h2>
<p>
The IqraEval leaderboard is ranked by phoneme-level <strong>F1-score</strong>.
We use a hierarchical evaluation (detection + diagnosis), following the <a href="https://arxiv.org/pdf/2310.13974" target="_blank">MDD overview</a>.
</p>
<ul>
<li><em><strong>What is said</strong></em>: annotated phoneme sequence</li>
<li><em><strong>What is predicted</strong></em>: model output</li>
<li><em><strong>What should have been said</strong></em>: reference sequence</li>
</ul>
<p>From these we compute:</p>
<ul>
<li><strong>TA:</strong> correct phonemes accepted</li>
<li><strong>TR:</strong> mispronunciations correctly detected</li>
<li><strong>FR:</strong> correct phonemes flagged as errors</li>
<li><strong>FA:</strong> mispronunciations missed</li>
</ul>
<p>Rates:</p>
<ul>
<li><strong>FRR:</strong> FR/(TA+FR)</li>
<li><strong>FAR:</strong> FA/(FA+TR)</li>
<li><strong>DER:</strong> DE/(CD+DE), where detected mispronunciations (TR) are further split into correct diagnoses (CD) and diagnostic errors (DE)</li>
</ul>
<p>Plus standard Precision, Recall, and F1 for detection:</p>
<ul>
<li>Precision = TR/(TR+FR)</li>
<li>Recall = TR/(TR+FA)</li>
<li>F1 = 2·P·R/(P+R)</li>
</ul>
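<p>
All of these follow directly from the four counts; a minimal sketch with hypothetical numbers:
</p>
<pre>
def detection_metrics(ta, tr, fr, fa):
    """Detection rates from the four counts: TA (correct accepted),
    TR (errors detected), FR (correct flagged), FA (errors missed)."""
    frr = fr / (ta + fr)   # false rejection rate
    far = fa / (fa + tr)   # false acceptance rate
    precision = tr / (tr + fr)
    recall = tr / (tr + fa)
    f1 = 2 * precision * recall / (precision + recall)
    return {"FRR": frr, "FAR": far,
            "Precision": precision, "Recall": recall, "F1": f1}

# Hypothetical counts, for illustration only:
print(detection_metrics(ta=900, tr=80, fr=20, fa=10))
</pre>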
<h2>Suggested Research Directions</h2>
<ol>
<li>
<strong>Advanced Mispronunciation Detection Models</strong><br>
Apply state-of-the-art self-supervised models (e.g., Wav2Vec 2.0, HuBERT), preferring variants pre-trained or fine-tuned on Arabic speech. These models can then be fine-tuned on Quranic recitations to improve phoneme-level accuracy (see the sketch after this list).
</li>
<li>
<strong>Data Augmentation Strategies</strong><br>
Create synthetic mispronunciation examples using pipelines like
<a href="https://arxiv.org/abs/2211.00923" target="_blank">SpeechBlender</a>.
Augmenting limited Arabic/Quranic speech data helps mitigate data scarcity and improves model robustness.
</li>
<li>
<strong>Analysis of Common Mispronunciation Patterns</strong><br>
Perform statistical analysis on the QuranMB dataset to identify prevalent errors (e.g., substituting similar phonemes, swapping vowels).
These insights can drive targeted training and tailored feedback rules.
</li>
</ol>
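<p>
As a starting point for direction 1, a minimal phoneme-recognition sketch using the Hugging Face <code>transformers</code> API. The checkpoint name is a placeholder; in practice one would substitute an Arabic-pretrained variant and fine-tune its CTC head on the Iqra’Eval phoneme labels:
</p>
<pre>
import numpy as np
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Placeholder checkpoint; substitute an Arabic/Quranic fine-tuned variant.
name = "facebook/wav2vec2-base-960h"
processor = Wav2Vec2Processor.from_pretrained(name)
model = Wav2Vec2ForCTC.from_pretrained(name)

# Stand-in waveform: in practice, use sample["audio"]["array"] at 16 kHz.
waveform = np.zeros(16000, dtype=np.float32)

inputs = processor(waveform, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(ids))  # greedy CTC decoding
</pre>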
<h2>Registration</h2>
<p>
Teams and individual participants must register to gain access to the test set. Please complete the registration form using the link below:
</p>
<p>
<a href="https://docs.google.com/forms/d/e/1FAIpQLSf8qVKV1C9JVY7gUloQRLX8iMBUaZNFtYHBcqG6obJU0JauGw/viewform" target="_blank">Registration Form</a>
</p>
<p>
Registration opens on June 10, 2025.
</p>
<h2>Future Updates</h2>
<p>
Further details on the open-set leaderboard submission will be posted on the shared task website (June 20, 2025). Stay tuned!
</p>
<h2>Contact and Support</h2>
<p>
For inquiries and support, reach out to the task coordinators at
<a href="mailto:iqraeval@googlegroups.com">iqraeval@googlegroups.com</a>.
</p>
<h2>References</h2>
<ul>
<li>El Kheir Y. et al., “SpeechBlender: Speech Augmentation Framework for Mispronunciation Data Generation,” arXiv:2211.00923, 2022.</li>
<li>Al Harere A. & Al Jallad K., “Mispronunciation Detection of Basic Quranic Recitation Rules using Deep Learning,” arXiv:2305.06429, 2023.</li>
<li>Aly S. A. et al., “ASMDD: Arabic Speech Mispronunciation Detection Dataset,” arXiv:2111.01136, 2021.</li>
<li>Moustafa A. & Aly S. A., “Efficient Voice Identification Using Wav2Vec2.0 and HuBERT…,” arXiv:2111.06331, 2021.</li>
<li>El Kheir Y. et al., “Automatic Pronunciation Assessment – A Review,” arXiv:2310.13974, 2023.</li>
</ul>
</div>
</body>
</html>