Iqra’Eval Shared Task

Overview

Iqra’Eval is a shared task on the automatic assessment of pronunciation in Qur’anic recitation, using computational methods to detect and diagnose pronunciation errors. Qur’anic recitation provides a standardized, well-defined context for evaluating Modern Standard Arabic (MSA) pronunciation, where precise articulation is not only valued but required for correctness under established Tajweed rules.

Participants will develop systems capable of detecting and diagnosing phoneme-level pronunciation errors in recited Qur’anic verses.

Timeline

🔊 Task Description

The Iqra’Eval task focuses on automatic pronunciation assessment in a Qur’anic context. Given a spoken audio clip of a verse and its fully vowelized reference text, your system should predict the phoneme sequence actually spoken by the reciter.

By comparing this predicted sequence with the reference text and the gold phoneme-sequence annotation, we can automatically detect pronunciation issues such as phoneme substitutions, deletions, and insertions.

This task helps diagnose and localize pronunciation errors, enabling educational feedback in applications like Qur’anic tutoring or speech evaluation tools.
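To make the comparison concrete, here is a minimal sketch (not part of the official tooling) that aligns a predicted phoneme sequence against the reference with Python's difflib and reports substitutions, deletions, and insertions; the phoneme strings below are invented for illustration.

    import difflib

    reference = "m a a n a n s a kh u m i n ʕ a a y a t i n".split()
    predicted = "m a a n a n s a k u m i n a a y a t i n".split()   # hypothetical system output

    # Align the two phoneme sequences and report where they differ.
    matcher = difflib.SequenceMatcher(a=reference, b=predicted)
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op == "replace":
            print(f"substitution: {reference[i1:i2]} -> {predicted[j1:j2]}")
        elif op == "delete":
            print(f"deletion: {reference[i1:i2]} missing from prediction")
        elif op == "insert":
            print(f"insertion: {predicted[j1:j2]} not in reference")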

Dataset Description

All data are hosted on Hugging Face. Two main splits are provided: a training split and a development split (see Data Splits below).

Column Definitions:

Data Splits:
• Training (train): 79 hours total
• Development (dev): 3.4 hours total
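As a sketch of typical usage, the splits can be loaded with the Hugging Face datasets library. The dataset identifier and split names below are placeholders; the official ones are given in the GitHub README.

    from datasets import load_dataset

    # Placeholder dataset ID -- consult the GitHub README for the official one.
    data = load_dataset("IqraEval/Iqra_train")
    print(data)              # shows the available splits and their sizes
    print(data["train"][0])  # inspect one training example and its columns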

TTS Data (Optional Use)

We also provide a high-quality TTS corpus for auxiliary experiments (e.g., data augmentation, synthetic pronunciation error simulation). This TTS set can be loaded via:
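(The dataset identifier below is a placeholder; the official ID is listed in the GitHub README.)

    from datasets import load_dataset

    # "IqraEval/Iqra_TTS" is a placeholder ID -- replace with the official TTS corpus ID.
    tts = load_dataset("IqraEval/Iqra_TTS")
    print(tts)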

Test Data: QuranMB

To construct a reliable test set, we selected 98 verses from the Qur’an, which were read aloud by 18 native Arabic speakers (14 female, 4 male), resulting in approximately 2 hours of recorded speech. The speakers were instructed to read the text in MSA at their normal tempo, disregarding Qur’anic Tajweed rules, while deliberately producing the specified pronunciation errors. To ensure consistency in error production, we developed a custom recording tool that highlighted the modified text and displayed additional instructions specifying the type of error (see the recording-tool figure). Before recording, speakers were required to read each sentence silently to familiarize themselves with the intended errors. After recording, three linguistic annotators verified and corrected the transcriptions and flagged all pronunciation errors for evaluation.

Resources

For detailed instructions on data access, phonetizer installation, and baseline usage, please refer to the GitHub README.

Evaluation Criteria

Systems will be scored on their ability to detect and correctly classify phoneme-level errors (substitutions, deletions, and insertions relative to the annotated phoneme sequence).

(Detailed evaluation weights and scripts will be made available on June 5, 2025.)
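Until the official scripts are released, the sketch below shows one common way to quantify phoneme-level accuracy: the phoneme error rate (PER), computed from an edit-distance alignment between the gold and predicted sequences. This is an illustration only; the official weights and metrics may differ.

    def phoneme_error_rate(gold, predicted):
        """Edit distance between phoneme lists, normalized by gold length."""
        m, n = len(gold), len(predicted)
        d = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(m + 1):
            d[i][0] = i
        for j in range(n + 1):
            d[0][j] = j
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                cost = 0 if gold[i - 1] == predicted[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,         # deletion
                              d[i][j - 1] + 1,         # insertion
                              d[i - 1][j - 1] + cost)  # substitution
        return d[m][n] / max(m, 1)

    gold = "i n n a m a a".split()
    pred = "i n a m a a".split()             # hypothetical system output (one deletion)
    print(phoneme_error_rate(gold, pred))    # 0.142... (1 error / 7 gold phonemes)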

Submission Details (Draft)

Participants are required to submit a CSV file named submission.csv containing the predicted phoneme sequences for each audio sample. The file must have exactly two columns: ID and Labels.

Below is a minimal example illustrating the required format:

ID,Labels
0000_0001, i n n a m a a y a k h a l l a h a m i n ʕ i b a a d i h u l ʕ u l a m
0000_0002, m a a n a n s a k h u m i n i ʕ a a y a t i n
0000_0003, y u k h i k u m u n n u ʔ a u ʔ a m a n a t a n m m i n h u
…

The first column (ID) must exactly match the audio filenames (without extension). The second column (Labels) contains the predicted phoneme sequence, with phonemes separated by single spaces.
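A minimal sketch of writing the file in this format with Python's csv module (the IDs and phoneme strings below are placeholders):

    import csv

    # predictions maps audio ID (filename without extension) -> space-separated phonemes.
    predictions = {
        "0000_0001": "i n n a m a a",
        "0000_0002": "m a a n a n s a kh u m i n",
    }

    with open("submission.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["ID", "Labels"])
        for audio_id, phonemes in predictions.items():
            writer.writerow([audio_id, phonemes])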

Important:

Future Updates

Further details on evaluation criteria (exact scoring weights), submission templates, and any clarifications will be posted on the shared task website when test data are released (June 5, 2025). Stay tuned!