arXiv:2503.23573

DASH: Detection and Assessment of Systematic Hallucinations of VLMs

Published on Mar 30 · Submitted by YanNeu on Apr 3

AI-generated summary

DASH, an automatic pipeline, identifies and assesses systematic hallucinations in vision-language models across a large dataset; fine-tuning with misleading images generated from the natural image manifold improves model accuracy.

Abstract

Vision-language models (VLMs) are prone to object hallucinations, where they erroneously indicate the presence of certain objects in an image. Existing benchmarks quantify hallucinations using relatively small, labeled datasets. However, this approach is i) insufficient to assess hallucinations that arise in open-world settings, where VLMs are widely used, and ii) inadequate for detecting systematic errors in VLMs. We propose DASH (Detection and Assessment of Systematic Hallucinations), an automatic, large-scale pipeline designed to identify systematic hallucinations of VLMs on real-world images in an open-world setting. A key component is DASH-OPT for image-based retrieval, where we optimize over the "natural image manifold" to generate images that mislead the VLM. The output of DASH consists of clusters of real and semantically similar images for which the VLM hallucinates an object. We apply DASH to PaliGemma and two LLaVA-NeXT models across 380 object classes and, in total, find more than 19k clusters with 950k images. We study the transfer of the identified systematic hallucinations to other VLMs and show that fine-tuning PaliGemma with the model-specific images obtained with DASH mitigates object hallucinations. Code and data are available at https://YanNeu.github.io/DASH.
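As an illustration of the kind of yes/no object-presence probe that such a pipeline relies on, below is a minimal Python sketch using a LLaVA-NeXT checkpoint from Hugging Face transformers. The model ID, prompt wording, and decision rule are illustrative assumptions, not the paper's exact implementation.

import torch
from PIL import Image
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration

# Illustrative assumption: any LLaVA-NeXT checkpoint works the same way here.
MODEL_ID = "llava-hf/llava-v1.6-mistral-7b-hf"
processor = LlavaNextProcessor.from_pretrained(MODEL_ID)
model = LlavaNextForConditionalGeneration.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16, device_map="auto"
)

def hallucinates(image: Image.Image, object_name: str) -> bool:
    """Return True if the VLM claims the object is present.

    For an image known NOT to contain object_name, True is a hallucination.
    The prompt follows the Mistral-based LLaVA-NeXT convention; the
    substring check on the answer is a simplification.
    """
    prompt = f"[INST] <image>\nIs there a {object_name} in this image? Answer yes or no. [/INST]"
    inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=10, do_sample=False)
    decoded = processor.decode(output[0], skip_special_tokens=True).lower()
    answer = decoded.split("[/inst]")[-1]  # keep only the generated part
    return "yes" in answer

# Usage: print(hallucinates(Image.open("example.jpg").convert("RGB"), "dog"))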

Community

Paper author and submitter

We propose DASH, a large-scale, fully automated pipeline that requires no human labeling and identifies systematic object hallucinations in VLMs.

Code and URLs for the 950K images that trigger object hallucinations are available on GitHub:
https://github.com/YanNeu/DASH

We also propose a new benchmark, DASH-B, to enable a more reliable evaluation of object hallucinations in VLMs:
https://github.com/YanNeu/DASH-B
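Given such a probe, a DASH-B-style evaluation reduces to measuring how often a model answers "yes" for objects that are absent. A hedged sketch follows; the (image_path, object_name) pair format is an assumption made for illustration, not the released benchmark schema.

from typing import Callable
from PIL import Image

def hallucination_rate(
    pairs: list[tuple[str, str]],
    probe: Callable[[Image.Image, str], bool],
) -> float:
    """pairs: (image_path, object_name) where the object is known to be
    absent; probe returns True when the VLM claims the object is present."""
    if not pairs:
        return 0.0
    errors = sum(probe(Image.open(p).convert("RGB"), obj) for p, obj in pairs)
    return errors / len(pairs)

# Usage, reusing the `hallucinates` probe sketched above:
# rate = hallucination_rate([("img_001.jpg", "dog"), ("img_002.jpg", "bench")], hallucinates)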



Models citing this paper: 0 · Datasets citing this paper: 1 · Spaces citing this paper: 0 · Collections including this paper: 2