<!doctype html>
<html>
<head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width" />
    <title>Iqra’Eval Shared Task</title>
    <link rel="stylesheet" href="style.css" />
</head>
<body>
    <div class="card">
        <h1>Iqra’Eval Shared Task</h1>

        <div style="text-align:center; margin: 20px 0;">
          <img src="IqraEval.png" alt="Iqra’Eval Shared Task" style="max-width:100%; height:auto;" />
        </div>

        <!-- Overview Section -->
        <h2>Overview</h2>
        <p>
            <strong>Iqra’Eval</strong> is a shared task aimed at advancing <strong>automatic assessment of Qur’anic recitation pronunciation</strong> by leveraging computational methods to detect and diagnose pronunciation errors. The focus on Qur’anic recitation provides a standardized and well-defined context for evaluating Modern Standard Arabic (MSA) pronunciation, where precise articulation is not only valued but essential for correctness according to established Tajweed rules.
        </p>
        <p>
            Participants will develop systems capable of:
        </p>
        <ul>
            <li>Detecting whether a segment of Qur’anic recitation contains pronunciation errors.</li>
            <li>Diagnosing the nature of the error (e.g., substitution, deletion, or insertion of phonemes).</li>
        </ul>

        <!-- Timeline Section -->
        <h2>Timeline</h2>
        <ul>
            <li><strong>June 1, 2025</strong>: Official announcement of the shared task</li>
            <li><strong>June 8, 2025</strong>: Release of training data, development set (QuranMB), phonetizer script, and baseline systems</li>
            <li><strong>July 24, 2025</strong>: Registration deadline and release of test data</li>
            <li><strong>July 27, 2025</strong>: End of evaluation cycle (test set submission closes)</li>
            <li><strong>July 30, 2025</strong>: Final results released</li>
            <li><strong>August 15, 2025</strong>: System description paper submissions due</li>
            <li><strong>August 22, 2025</strong>: Notification of acceptance</li>
            <li><strong>September 5, 2025</strong>: Camera-ready versions due</li>
        </ul>

        <!-- Task Description -->

        <h2>🔊 Task Description</h2>
        <p>
            The Iqra’Eval task focuses on <strong>automatic pronunciation assessment</strong> in a Qur’anic context.
            Given a spoken audio clip of a verse and its fully vowelized reference text, your system should predict
            the <strong>phoneme sequence actually spoken</strong> by the reciter.
        </p>
        <p>
            By comparing the predicted sequence against the gold phoneme annotation of the reference text, we can automatically detect pronunciation issues such as:
        </p>
        <ul>
            <li><strong>Substitutions</strong>: e.g., saying /k/ instead of /q/</li>
            <li><strong>Insertions</strong>: adding a sound not present in the reference</li>
            <li><strong>Deletions</strong>: skipping a required phoneme</li>
        </ul>
        <p>
            This task helps diagnose and localize pronunciation errors, enabling educational feedback in applications like Qur’anic tutoring or speech evaluation tools.
        </p>
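        <p>
            For intuition only (this is not the official evaluation script), the sketch below aligns a predicted
            phoneme sequence against the reference phonemes using Python's <code>difflib</code> and labels each
            mismatch as a substitution, insertion, or deletion. The phoneme symbols in the toy example are invented
            for illustration and do not follow the task's phonetizer output exactly.
        </p>
        <pre>
from difflib import SequenceMatcher

def diagnose(reference, predicted):
    """Align two phoneme lists and report (error type, reference phonemes, predicted phonemes)."""
    errors = []
    for op, r1, r2, p1, p2 in SequenceMatcher(None, reference, predicted).get_opcodes():
        if op == "replace":
            errors.append(("substitution", reference[r1:r2], predicted[p1:p2]))
        elif op == "delete":
            errors.append(("deletion", reference[r1:r2], []))
        elif op == "insert":
            errors.append(("insertion", [], predicted[p1:p2]))
    return errors

# Toy example: /q/ pronounced as /k/ and one long vowel shortened (a deletion).
ref = "q u l h u w a l l a a h u".split()
pred = "k u l h u w a l l a h u".split()
print(diagnose(ref, pred))
# [('substitution', ['q'], ['k']), ('deletion', ['a'], [])]
        </pre>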

        <h2>Dataset Description</h2>
        <p>
            All data are hosted on Hugging Face. Two main splits are provided:
        </p>
        <ul>
            <li>
                <strong>Training set:</strong> 79 hours of Modern Standard Arabic (MSA) speech, augmented with multiple Qur’anic recitations.  
                <br />
                <code>df = load_dataset("IqraEval/Iqra_train", split="train")</code>
            </li>
            <li>
                <strong>Development set:</strong> 3.4 hours reserved for tuning and validation.  
                <br />
                <code>df = load_dataset("IqraEval/Iqra_train", split="dev")</code>
            </li>
        </ul>
        <p>
            <strong>Column Definitions:</strong>
        </p>
        <ul>
            <li><code>audio</code>: Speech waveform array.</li>
            <li><code>sentence</code>: Original sentence text (may be partially diacritized or non-diacritized).</li>
            <li><code>index</code>: If the sentence is from the Qur’an, its verse index (0–6265, including the Basmalah); otherwise <code>-1</code>.</li>
            <li><code>tashkeel_sentence</code>: Fully diacritized sentence (auto-generated via a diacritization tool).</li>
            <li><code>phoneme</code>: Phoneme sequence corresponding to the diacritized sentence (Nawar Halabi phonetizer).</li>
        </ul>
        <p>
            <strong>Data Splits:</strong>
        </p>
        <ul>
            <li>Training (train): 79 hours total</li>
            <li>Development (dev): 3.4 hours total</li>
        </ul>
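        <p>
            A minimal loading sketch with the Hugging Face <code>datasets</code> library is shown below; it assumes
            the column names listed above and that the <code>audio</code> column is decoded by the library into a
            waveform array plus sampling rate.
        </p>
        <pre>
from datasets import load_dataset

# Training (~79 h) and development (~3.4 h) splits.
train = load_dataset("IqraEval/Iqra_train", split="train")
dev = load_dataset("IqraEval/Iqra_train", split="dev")

# Inspect one example: raw text, fully diacritized text, and phoneme sequence.
sample = train[0]
print(sample["sentence"])
print(sample["tashkeel_sentence"])
print(sample["phoneme"])
print(sample["index"])  # Qur'anic verse index, or -1 for non-Qur'anic text

# If the audio column is typed as datasets.Audio, each entry holds a waveform and sampling rate.
print(len(sample["audio"]["array"]), sample["audio"]["sampling_rate"])
        </pre>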

        <!-- Additional TTS Data -->
        <h2>TTS Data (Optional Use)</h2>
        <p>
            We also provide a high-quality TTS corpus for auxiliary experiments (e.g., data augmentation, synthetic pronunciation error simulation). This TTS set can be loaded via:
        </p>
        <ul>
            <li><code>df_tts = load_dataset("IqraEval/Iqra_TTS")</code></li>
        </ul>

      <h2>Test Data: QuranMB</h2>
        <p>
          To construct a reliable test set, we selected 98 verses from the Qur’an, which were read aloud by 18 native Arabic speakers (14 female, 4 male), yielding approximately 2 hours of recorded speech. The speakers were instructed to read the text in MSA at their normal tempo, disregarding Qur’anic tajweed rules, while deliberately producing specified pronunciation errors. To ensure consistency in error production, we developed a custom recording tool that highlights the modified text and displays additional instructions specifying the type of error. Before recording, speakers silently read each sentence to familiarize themselves with the intended errors, then read it aloud. After recording, three linguistic annotators verified and corrected the phoneme sequences and flagged all pronunciation errors for evaluation.
        </p>
        <ul>
          <li><code>df_test = load_dataset("IqraEval/Iqra_QuranMB_v2")</code></li>
        </ul>
      
        <!-- Resources & Links -->
        <h2>Resources</h2>
        <ul>
            <li>
                <a href="https://huggingface.co/datasets/IqraEval/Iqra_train" target="_blank">
                    Training &amp; Development Data on Hugging Face
                </a>
            </li>
            <li>
                <a href="https://huggingface.co/datasets/IqraEval/Iqra_TTS" target="_blank">
                    IqraEval TTS Data on Hugging Face
                </a>
            </li>
            <li>
                <a href="https://github.com/Iqra-Eval/interspeech_IqraEval" target="_blank">
                    Baseline systems &amp; training scripts (GitHub)
                </a>
            </li>
        </ul>
        <p>
            <em>
                For detailed instructions on data access, phonetizer installation, and baseline usage, please refer to the GitHub README.  
            </em>
        </p>

              <h2>Evaluation Criteria</h2>
        <p>
            Systems will be scored on their ability to detect and correctly classify phoneme-level errors:
        </p>
        <ul>
            <li><strong>Detection accuracy:</strong> Did the system spot that a phoneme-level error occurred in the segment?</li>
            <li><strong>Mispronunciation detection F1-score:</strong> How accurately the system identifies which phonemes were mispronounced.</li>
        </ul>
        <p>
            <em>(Detailed evaluation weights and scripts will be made available on June 5, 2025.)</em>
        </p>
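        <p>
            As a rough illustration of the detection part of the metric (not the official scoring code, whose exact
            weights are still to be released), phoneme-level detection F1 can be computed from binary
            correct/mispronounced flags; the flag vectors below are invented for the example.
        </p>
        <pre>
def detection_f1(gold_flags, pred_flags):
    """Binary F1 for phoneme-level error detection (1 = mispronounced, 0 = correct)."""
    tp = sum(1 for g, p in zip(gold_flags, pred_flags) if g == 1 and p == 1)
    fp = sum(1 for g, p in zip(gold_flags, pred_flags) if g == 0 and p == 1)
    fn = sum(1 for g, p in zip(gold_flags, pred_flags) if g == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Toy flags: the system finds one of two true errors and raises one false alarm.
print(detection_f1([1, 0, 1, 0, 0], [1, 0, 0, 1, 0]))  # 0.5
        </pre>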

        <!-- Submission Details -->
        <h2>Submission Details (Draft)</h2>
        <p>
            Participants are required to submit a CSV file named <code>submission.csv</code> containing the predicted phoneme sequences for each audio sample. The file must have exactly two columns:
        </p>
        <ul>
            <li><strong>ID:</strong> Unique identifier of the audio sample.</li>
            <li><strong>Labels:</strong> The predicted phoneme sequence, with each phoneme separated by a single space.</li>
        </ul>
        <p>
            Below is a minimal example illustrating the required format:
        </p>
        <pre>
ID,Labels
0000_0001, i n n a m a a y a k h a l l a h a m i n ʕ i b a a d i h u l ʕ u l a m
0000_0002, m a a n a n s a k h u m i n i ʕ a a y a t i n
0000_0003, y u k h i k u m u n n u ʔ a u ʔ a m a n a t a n m m i n h u
…  
        </pre>
        <p>
            The first column (<code>ID</code>) must exactly match the audio filename (without extension); the second column (<code>Labels</code>) contains the predicted phoneme string.
        </p>
        <p>
            <strong>Important:</strong>
        </p>
        <ul>
            <li>Use UTF-8 encoding.</li>
            <li>Do not include extra spaces at the start or end of each line.</li>
            <li>Submit a single CSV file (no archives). The filename must be <code>submission.csv</code>.</li>
        </ul>
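        <p>
            A minimal sketch for producing the file, assuming <code>predictions</code> is a dictionary that maps each
            sample ID to its predicted phoneme list (the IDs and phonemes below are placeholders):
        </p>
        <pre>
import csv

# Hypothetical mapping from sample ID to predicted phoneme sequence.
predictions = {
    "0000_0001": ["i", "n", "n", "a", "m", "a", "a"],
    "0000_0002": ["m", "a", "a", "n", "a", "n", "s", "a", "k", "h"],
}

with open("submission.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["ID", "Labels"])
    for sample_id, phonemes in predictions.items():
        writer.writerow([sample_id, " ".join(phonemes)])
        </pre>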
      
        <!-- Placeholder for Future Details -->
        <h2>Future Updates</h2>
        <p>
            Further details on <strong>evaluation criteria</strong> (exact scoring weights), <strong>submission templates</strong>, and any clarifications will be posted on the shared task website when the test data are released (July 24, 2025). Stay tuned!
        </p>
    </div>
</body>
</html>