A New Massive Multilingual Dataset for High-Performance Language Technologies
Abstract
We present the HPLT (High Performance Language Technologies) language resources, a new massive multilingual dataset comprising both monolingual and bilingual corpora extracted from CommonCrawl and from previously unused web crawls from the Internet Archive. We describe our methods for data acquisition, management, and processing of large corpora, which rely on open-source software tools and high-performance computing. Our monolingual collection focuses on low- to medium-resourced languages, covering 75 languages and totaling ~5.6 trillion word tokens after document-level de-duplication. Our English-centric parallel corpus is derived from its monolingual counterpart and covers 18 language pairs with more than 96 million aligned sentence pairs and roughly 1.4 billion English tokens. The HPLT language resources are among the largest open text corpora ever released, providing a valuable resource for language modeling and machine translation training. We publicly release the corpora, the software, and the tools used in this work.
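The abstract mentions de-duplicating the monolingual collection at the document level. As a minimal sketch only, the following shows exact-match document-level de-duplication via content hashing; the names here are illustrative, and HPLT's actual pipeline (which may handle near-duplicates and normalization differently) is not reproduced here.

```python
import hashlib

def dedup_documents(docs):
    """Keep only the first occurrence of each distinct document.

    Exact-match de-duplication by hashing whitespace-normalized text.
    This is an illustrative sketch, not the HPLT pipeline itself.
    """
    seen = set()
    unique = []
    for doc in docs:
        # Collapse whitespace so trivial formatting differences hash identically
        key = hashlib.sha256(" ".join(doc.split()).encode("utf-8")).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(doc)
    return unique

# Example: the second document differs only in spacing and is dropped
corpus = ["hello world", "hello   world", "goodbye world"]
print(dedup_documents(corpus))  # → ['hello world', 'goodbye world']
```

At web scale, an in-memory set would be replaced by a distributed or disk-backed structure, but the core idea of hashing canonicalized documents is the same.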