Iqra’Eval is a shared task aimed at advancing automatic assessment of Qur’anic recitation pronunciation by leveraging computational methods to detect and diagnose pronunciation errors. The focus on Qur’anic recitation provides a standardized and well-defined context for evaluating Modern Standard Arabic (MSA) pronunciation, where precise articulation is not only valued but essential for correctness according to established Tajweed rules.
Participants will develop systems capable of detecting mispronunciations in Qur'anic recitation and providing detailed diagnostic feedback. Users read aloud vowelized Qur'anic verses; the system predicts the phoneme sequence actually uttered by the speaker, which may contain mispronunciations. Systems are evaluated on the QuranMB.v2 dataset, which contains human-annotated mispronunciations.
Figure: Overview of the Mispronunciation Detection Workflow
The user is shown a Reference Verse (what should have been said) in Arabic script along with its corresponding Reference Phoneme Sequence.
Example:
< i n n a SS A f aa w a l m a r w a t a m i n $ a E a a < i r i l l a h i
The user recites the verse aloud; the system captures and stores the audio waveform for subsequent analysis.
The stored audio is fed into a Mispronunciation Detection Model. This model predicts the phoneme sequence uttered by the speaker, which may contain mispronunciations.
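One common way to implement such a model is as a phoneme-level CTC recognizer over the raw waveform. The sketch below is illustrative only, not the official baseline: the checkpoint name is hypothetical, and greedy CTC decoding is assumed.

import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# Hypothetical checkpoint name; any CTC model fine-tuned to emit
# phoneme labels could be substituted here.
processor = Wav2Vec2Processor.from_pretrained("your-org/wav2vec2-phoneme-ctc")
model = Wav2Vec2ForCTC.from_pretrained("your-org/wav2vec2-phoneme-ctc")

def predict_phonemes(waveform, sampling_rate=16000):
    # Convert the raw audio into model inputs (normalization, batching).
    inputs = processor(waveform, sampling_rate=sampling_rate, return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    # Greedy CTC decoding: most likely token per frame, then the tokenizer
    # collapses repeats and removes blanks.
    ids = torch.argmax(logits, dim=-1)
    return processor.batch_decode(ids)[0]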
Example of Mispronunciation:

Reference:  < i n n a SS A f aa w a l m a r w a t a m i n $ a E a a < i r i l l a h i
Predicted:  < i n n a SS A f aa w a l m a r w a t a m i n s a E a a < i r u l l a h i
Annotated:  < i n n a SS A f aa w a l m a r w a m i n s a E a a < i r u l l a h i
In this case, the phoneme $ was mispronounced as s, and i was mispronounced as u. The annotated phoneme sequence also indicates that the phoneme ta was omitted, but the model failed to detect this omission.
All data are hosted on Hugging Face. Two main splits are provided:
df = load_dataset("IqraEval/Iqra_train", split="train")
df = load_dataset("IqraEval/Iqra_train", split="dev")
Column Definitions:
• audio: speech array.
• sentence: original sentence text (may be partially diacritized or non-diacritized).
• index: if from the Quran, the verse index (0–6265, including Basmalah); otherwise -1.
• tashkeel_sentence: fully diacritized sentence (auto-generated via a diacritization tool).
• phoneme: phoneme sequence corresponding to the diacritized sentence (Nawar Halabi phonetizer).
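As a quick sanity check, a sample can be inspected as follows (a minimal sketch, assuming the splits were loaded as above and that audio uses the standard datasets Audio feature):

ex = df_train[0]
print(ex["sentence"])           # original, possibly partially diacritized text
print(ex["tashkeel_sentence"])  # fully diacritized text
print(ex["phoneme"])            # phoneme sequence for the diacritized text
waveform = ex["audio"]["array"]        # raw speech samples (numpy array)
sr = ex["audio"]["sampling_rate"]      # sampling rate in Hz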
Data Splits:
• Training (train): 79 hours total
• Development (dev): 3.4 hours total
We also provide a high-quality TTS corpus for auxiliary experiments (e.g., data augmentation, synthetic pronunciation error simulation). This TTS set can be loaded via:
df_tts = load_dataset("IqraEval/Iqra_TTS")
To construct a reliable test set, we select 98 verses from the Qur'an, read aloud by 18 native Arabic speakers (14 female, 4 male), resulting in approximately 2 hours of recorded speech. The speakers were instructed to read the text in MSA at their normal tempo, disregarding Qur'anic tajweed rules, while deliberately producing the specified pronunciation errors. To ensure consistency in error production, we developed a custom recording tool that highlighted the modified text and displayed additional instructions specifying the type of error. Before recording, speakers were required to silently read each sentence to familiarize themselves with the intended errors. After recording, three linguistic annotators verified and corrected the phoneme sequences and flagged all pronunciation errors for evaluation.
df_test = load_dataset("IqraEval/Iqra_QuranMB_v2")
For detailed instructions on data access, phonetizer installation, and baseline usage, please refer to the GitHub README.
The primary evaluation metric for the IqraEval system is the phoneme-level F1-score. In addition, we adopt a hierarchical evaluation structure, standard in mispronunciation detection and diagnosis (MDD), that breaks performance down into a detection phase and a diagnostic phase.
Hierarchical Evaluation Structure: The hierarchical mispronunciation detection process relies on three sequences: the canonical (reference) phoneme sequence, the phoneme sequence the speaker actually uttered (as annotated by humans), and the phoneme sequence predicted by the model. Aligning these sequences yields counts of true acceptances (TA), false rejections (FR), false acceptances (FA), and true rejections (TR). From these counts, we derive three rates:
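Assuming the standard MDD formulation (the exact definitions will follow the official scoring script), the three rates are the false rejection rate (FRR), the false acceptance rate (FAR), and the diagnostic error rate (DER):

\mathrm{FRR} = \frac{\mathrm{FR}}{\mathrm{TA} + \mathrm{FR}}, \qquad
\mathrm{FAR} = \frac{\mathrm{FA}}{\mathrm{FA} + \mathrm{TR}}, \qquad
\mathrm{DER} = \frac{\mathrm{DE}}{\mathrm{CD} + \mathrm{DE}}

where correct diagnoses (CD) and diagnostic errors (DE) partition the true rejections according to whether the predicted phoneme matches the annotated one.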
In addition to these hierarchical measures, we compute the standard Precision, Recall, and F-measure for mispronunciation detection:
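In terms of the same counts, and treating a flagged mispronunciation as a rejection, these detection measures take their usual form:

\mathrm{Precision} = \frac{\mathrm{TR}}{\mathrm{TR} + \mathrm{FR}}, \qquad
\mathrm{Recall} = \frac{\mathrm{TR}}{\mathrm{TR} + \mathrm{FA}}, \qquad
\mathrm{F1} = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}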
Participants are required to submit a CSV file named submission.csv containing the predicted phoneme sequence for each audio sample. The file must have exactly two columns, ID and Labels.
Below is a minimal example illustrating the required format:
ID,Labels
0000_0001, i n n a m a a y a k h a l l a h a m i n ʕ i b a a d i h u l ʕ u l a m
0000_0002, m a a n a n s a k h u m i n i ʕ a a y a t i n
0000_0003, y u k h i k u m u n n u ʔ a u ʔ a m a n a t a n m m i n h u
…
The first column (ID) should match exactly the audio filenames (without extension). The second column (Labels) is the predicted phoneme string.
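A minimal sketch of writing such a file, assuming predictions is a mapping from sample IDs (audio filenames without extension) to predicted phoneme strings; the entries shown are placeholders:

import csv

# Hypothetical predictions; in practice these come from your model.
predictions = {
    "0000_0001": "i n n a m a a",
    "0000_0002": "m a a n a n s a k h",
}

with open("submission.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["ID", "Labels"])  # required header
    for sample_id, phoneme_string in predictions.items():
        writer.writerow([sample_id, phoneme_string])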
Important: name your file teamID_submission.csv, where teamID is your team's identifier.

Further details on evaluation criteria (exact scoring weights), submission templates, and any clarifications will be posted on the shared task website when the test data are released (June 5, 2025). Stay tuned!