WMT24++: Expanding the Language Coverage of WMT24 to 55 Languages & Dialects
Abstract
The extended WMT24 dataset, spanning 55 languages and four domains, is used to benchmark LLMs against a range of MT providers; LLMs are found to be the best-performing MT systems.
As large language models (LLMs) become increasingly capable in languages other than English, it is important to collect benchmark datasets to evaluate their multilingual performance, including on tasks like machine translation (MT). In this work, we extend the WMT24 dataset to cover 55 languages by collecting new human-written references and post-edits for 46 new languages and dialects, in addition to post-edits of the references in 8 out of 9 languages in the original WMT24 dataset. The dataset covers four domains: literary, news, social, and speech. We benchmark a variety of MT providers and LLMs on the collected dataset using automatic metrics and find that LLMs are the best-performing MT systems in all 55 languages. These results should be confirmed using a human-based evaluation, which we leave for future work.
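Since the dataset is distributed on the Hugging Face Hub, a minimal loading sketch follows, using the `datasets` library. The repository id `google/wmt24pp`, the language-pair config name `en-de_DE`, and the field names (`source`, `target`, `domain`) are assumptions for illustration, not details stated in the abstract.

```python
# A minimal sketch of loading one language pair of WMT24++ via the
# Hugging Face `datasets` library. Repo id, config name, and field
# names below are assumptions, not confirmed by the abstract.
from datasets import load_dataset

# Each config pairs English source text with one target language/dialect.
ds = load_dataset("google/wmt24pp", "en-de_DE", split="train")

# Inspect one segment: English source and the human target translation.
example = ds[0]
print(example["source"])
print(example["target"])

# Restrict to a single domain; the dataset covers literary, news,
# social, and speech.
news = ds.filter(lambda row: row["domain"] == "news")
print(f"news segments: {len(news)}")
```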