arxiv:2407.01449

ColPali: Efficient Document Retrieval with Vision Language Models

Published on Jun 27, 2024
Β· Submitted by manu on Jul 2, 2024
#3 Paper of the day
Abstract

The ColPali retrieval model uses Vision Language Models to create embeddings from images of document pages, improving performance and speed in visually rich document retrieval tasks.

AI-generated summary

Documents are visually rich structures that convey information through text, as well as tables, figures, page layouts, or fonts. While modern document retrieval systems exhibit strong performance on query-to-text matching, they struggle to exploit visual cues efficiently, hindering their performance on practical document retrieval applications such as Retrieval Augmented Generation. To benchmark current systems on visually rich document retrieval, we introduce the Visual Document Retrieval Benchmark ViDoRe, composed of various page-level retrieval tasks spanning multiple domains, languages, and settings. The inherent shortcomings of modern systems motivate the introduction of a new retrieval model architecture, ColPali, which leverages the document understanding capabilities of recent Vision Language Models to produce high-quality contextualized embeddings solely from images of document pages. Combined with a late interaction matching mechanism, ColPali largely outperforms modern document retrieval pipelines while being drastically faster and end-to-end trainable.
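The "late interaction matching mechanism" mentioned above is the ColBERT-style MaxSim scoring: each query token embedding is compared against every document (patch) embedding, the best match per query token is kept, and the per-token maxima are summed. The numpy sketch below illustrates this scoring rule only; the embedding dimensions and toy inputs are assumptions for illustration, not ColPali's actual configuration.

```python
import numpy as np

def late_interaction_score(query_emb: np.ndarray, doc_emb: np.ndarray) -> float:
    """ColBERT-style MaxSim: for each query token, take the maximum
    cosine similarity over all document patch embeddings, then sum."""
    # Normalize rows so dot products are cosine similarities.
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    d = doc_emb / np.linalg.norm(doc_emb, axis=1, keepdims=True)
    sim = q @ d.T                 # shape: (n_query_tokens, n_doc_patches)
    return float(sim.max(axis=1).sum())  # best match per query token, summed

# Toy example: 3 query token vectors scored against two candidate "pages"
# (dimensions are illustrative, not the model's real embedding size).
rng = np.random.default_rng(0)
query = rng.normal(size=(3, 8))
page_a = rng.normal(size=(16, 8))
page_b = rng.normal(size=(16, 8))
scores = {name: late_interaction_score(query, emb)
          for name, emb in [("page_a", page_a), ("page_b", page_b)]}
```

At retrieval time, candidate pages are ranked by this score; because each page's patch embeddings are precomputed offline, only the cheap dot-product/max/sum step runs per query.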

Community

Comment from the paper author and submitter:

All resources (models, benchmark, demos) are in the Hugging Face organization:

https://huggingface.co/vidore


Models citing this paper 78

Datasets citing this paper 22

Spaces citing this paper 75

Collections including this paper 20