EXAONE Deep: Reasoning Enhanced Language Models
Abstract
We present the EXAONE Deep series, which exhibits superior capabilities in various reasoning tasks, including math and coding benchmarks. We train our models mainly on a reasoning-specialized dataset that incorporates long streams of thought processes. Evaluation results show that our smaller models, EXAONE Deep 2.4B and 7.8B, outperform other models of comparable size, while the largest model, EXAONE Deep 32B, demonstrates competitive performance against leading open-weight models. All EXAONE Deep models are openly available for research purposes and can be downloaded from https://huggingface.co/LGAI-EXAONE.
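Since the abstract notes that the checkpoints are openly released on Hugging Face, below is a minimal sketch of how one might load and prompt one of them with the transformers library. The repository ID `LGAI-EXAONE/EXAONE-Deep-7.8B`, the `trust_remote_code=True` flag, and the chat-template prompting are assumptions not stated in the abstract; consult the model cards at https://huggingface.co/LGAI-EXAONE for the exact usage.

```python
# Minimal sketch (assumptions: repo ID, trust_remote_code, chat-template usage).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LGAI-EXAONE/EXAONE-Deep-7.8B"  # assumed repo name; 2.4B and 32B variants also exist per the abstract

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",      # load in the checkpoint's native precision
    device_map="auto",       # place weights on available GPU(s)/CPU
    trust_remote_code=True,  # assumption: some EXAONE releases ship custom model code
)

# Reasoning-specialized models are typically prompted through the chat template
# so that the long thought-process formatting is applied correctly.
messages = [{"role": "user", "content": "How many prime numbers are below 30?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```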