arxiv:2502.12835

Subword models struggle with word learning, but surprisal hides it

Published on Feb 18, 2025
AI-generated summary

Character language models outperform subword language models in word learning and separate word learning from syntactic learning, suggesting they are better suited for modeling language acquisition.

Abstract

We study word learning in subword and character language models with the psycholinguistic lexical decision task. While subword LMs struggle to discern words from non-words with high accuracy, character LMs solve this task easily and consistently. Furthermore, when word learning and syntactic learning are compared, the two processes are separable in character LMs, where word learning precedes syntactic learning, whereas they unfold simultaneously in subword LMs. This raises questions about the adequacy of subword LMs for modeling language acquisition and positions character LMs as a viable alternative.
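To make the setup concrete, below is a minimal sketch of a surprisal-based lexical decision probe for a causal LM using the Hugging Face transformers API. It is not the paper's protocol: the model name (gpt2), the toy word and non-word lists, and the mean-score threshold are illustrative assumptions standing in for the paper's stimuli and decision criterion.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder; the paper compares subword and character LMs

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()


def string_log_prob(text: str) -> float:
    """Total log-probability (negative surprisal) the LM assigns to `text`."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    # Prepend BOS so even single-token items get a prediction step.
    bos = torch.tensor([[tokenizer.bos_token_id]])
    ids = torch.cat([bos, ids], dim=1)
    with torch.no_grad():
        logits = model(ids).logits
    # Position t predicts token t+1, so shift logits and targets by one.
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    targets = ids[:, 1:]
    token_lp = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    return token_lp.sum().item()


# Toy stimuli; a real lexical-decision set pairs words with matched non-words.
words = ["house", "garden", "window"]
nonwords = ["hoose", "gorden", "wandow"]

scores = {item: string_log_prob(item) for item in words + nonwords}
threshold = sum(scores.values()) / len(scores)  # naive split, purely illustrative
for item, score in scores.items():
    guess = "word" if score >= threshold else "non-word"
    print(f"{item:>8}  log p = {score:8.2f}  ->  {guess}")
```

The same scoring applies to a character LM, with the sum running over characters instead of subword tokens.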

