Papers
arxiv:2407.18887

Embedding And Clustering Your Data Can Improve Contrastive Pretraining

Published on Jul 26, 2024
Authors:

Abstract

Recent studies of large-scale contrastive pretraining in the text embedding domain show that using single-source minibatches, rather than mixed-source minibatches, can substantially improve overall model accuracy. In this work, we explore extending training data stratification beyond source granularity by leveraging a pretrained text embedding model and the classic k-means clustering algorithm to further split training data apart by the semantic clusters within each source. Experimentally, we observe a notable increase in NDCG@10 when pretraining a BERT-based text embedding model on query-passage pairs from the MSMARCO passage retrieval dataset. Additionally, we conceptually connect our clustering approach to both the Topic Aware Sampling (TAS) aspect of the TAS-B methodology and the nearest-neighbor-based hard-negative mining aspect of the ANCE methodology, and we discuss how this unified view motivates future lines of research on the organization of contrastive pretraining data.

AI-generated summary

Using k-means clustering to further stratify training data based on semantic clusters within sources improves NDCG@10 for BERT-based text embedding models.
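The batching scheme the abstract describes lends itself to a short sketch. Below is a minimal Python sketch, not the paper's implementation, of cluster-stratified minibatch construction: embed the passages of one data source with a pretrained text embedding model, partition them into semantic clusters with k-means, and draw each minibatch from a single cluster. The encoder name (all-MiniLM-L6-v2), the choice to cluster on passage embeddings rather than query embeddings, the cluster count, and the batch size are all illustrative assumptions rather than the paper's configuration.

```python
# Sketch of cluster-stratified minibatch construction for contrastive pretraining.
# Assumptions: sentence-transformers for the pretrained encoder, scikit-learn for
# k-means, clustering on passage embeddings; none of these choices are prescribed
# by the paper.
import random
from collections import defaultdict

from sklearn.cluster import KMeans
from sentence_transformers import SentenceTransformer


def build_cluster_stratified_batches(pairs, n_clusters=100, batch_size=32,
                                     embed_model="all-MiniLM-L6-v2"):
    """pairs: list of (query, passage) tuples from a single training source."""
    encoder = SentenceTransformer(embed_model)

    # Embed the passages with a pretrained text embedding model.
    passage_vecs = encoder.encode([p for _, p in pairs], normalize_embeddings=True)

    # Split the source into semantic clusters with k-means
    # (requires len(pairs) >= n_clusters).
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(passage_vecs)

    # Group pairs by cluster so every minibatch is drawn from one cluster only.
    by_cluster = defaultdict(list)
    for pair, label in zip(pairs, labels):
        by_cluster[label].append(pair)

    batches = []
    for cluster_pairs in by_cluster.values():
        random.shuffle(cluster_pairs)
        for i in range(0, len(cluster_pairs), batch_size):
            batch = cluster_pairs[i:i + batch_size]
            if len(batch) == batch_size:  # drop incomplete trailing batches
                batches.append(batch)

    # Shuffle the order of batches, but keep each batch single-cluster.
    random.shuffle(batches)
    return batches
```

Keeping each batch inside one semantic cluster is what connects this scheme, as the abstract notes, to TAS-style topic-aware batch composition and to nearest-neighbor hard-negative mining: the other in-batch examples are semantically close to each query, so the contrastive loss sees harder negatives than it would with mixed-source or mixed-topic batches.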

Models citing this paper: 7

Datasets citing this paper: 0

Spaces citing this paper: 57

Collections including this paper: 0
