RepBERT: Contextualized Text Embeddings for First-Stage Retrieval
Abstract
RepBERT uses fixed-length contextualized embeddings for document and query representation, achieving state-of-the-art retrieval results on the MS MARCO Passage Ranking task with efficiency similar to bag-of-words methods.
Although exact term matching between queries and documents is the dominant method for first-stage retrieval, we propose a different approach, called RepBERT, which represents documents and queries with fixed-length contextualized embeddings. The inner products of query and document embeddings are regarded as relevance scores. On the MS MARCO Passage Ranking task, RepBERT achieves state-of-the-art results among all initial retrieval techniques, and its efficiency is comparable to bag-of-words methods.
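The scoring scheme the abstract describes can be sketched as follows. This is a minimal illustration of inner-product relevance scoring and ranking, using random vectors as stand-ins for the contextualized embeddings RepBERT would produce; the dimension and number of candidates are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 768  # hidden size of BERT-base, assumed here for illustration

# Stand-ins for RepBERT's fixed-length contextualized embeddings:
# one query vector and five candidate passage vectors.
query_emb = rng.normal(size=dim)
doc_embs = rng.normal(size=(5, dim))

# Relevance score of each passage = inner product with the query embedding.
scores = doc_embs @ query_emb

# First-stage retrieval: rank candidates by score, highest first.
ranking = np.argsort(-scores)
print(ranking)
```

Because scoring reduces to dense inner products, the document embeddings can be precomputed offline and ranked with standard vector operations, which is what makes the method's efficiency comparable to bag-of-words retrieval.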