ChartLens: Fine-grained Visual Attribution in Charts
Abstract
ChartLens augments multimodal large language models with fine-grained visual attribution, improving the accuracy of fine-grained chart attributions by 26-66%.
The growing capabilities of multimodal large language models (MLLMs) have advanced tasks like chart understanding. However, these models often suffer from hallucinations, where generated text sequences conflict with the provided visual data. To address this, we introduce the task of Post-Hoc Visual Attribution for Charts, which identifies fine-grained chart elements that validate a given chart-associated response. We propose ChartLens, a novel chart attribution algorithm that uses segmentation-based techniques to identify chart objects and employs set-of-marks prompting with MLLMs for fine-grained visual attribution. Additionally, we present ChartVA-Eval, a benchmark with synthetic and real-world charts from diverse domains like finance, policy, and economics, featuring fine-grained attribution annotations. Our evaluations show that ChartLens improves fine-grained attributions by 26-66%.
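As a rough illustration of the set-of-marks step described in the abstract, the sketch below overlays numeric marks on segmented chart elements and asks an MLLM to cite the marks that support a given answer. This is a minimal sketch under stated assumptions, not the paper's actual implementation: `ChartElement`, `overlay_marks`, `attribute_response`, and the injected `query_mllm` callable are hypothetical names introduced here for illustration.

```python
# Minimal sketch of set-of-marks prompting for chart attribution.
# Hypothetical interface: `query_mllm` stands in for any MLLM API call
# and is NOT the paper's actual method or a real library function.

from dataclasses import dataclass
from PIL import Image, ImageDraw


@dataclass
class ChartElement:
    """A segmented chart object (e.g., a bar or data point) and its box."""
    mark_id: int
    bbox: tuple[float, float, float, float]  # (x0, y0, x1, y1) in pixels


def overlay_marks(image: Image.Image, elements: list[ChartElement]) -> Image.Image:
    """Draw a numbered mark on each segmented element (set-of-marks)."""
    annotated = image.copy()
    draw = ImageDraw.Draw(annotated)
    for el in elements:
        x0, y0, _, _ = el.bbox
        draw.rectangle(el.bbox, outline="red", width=2)
        draw.text((x0 + 2, y0 + 2), str(el.mark_id), fill="red")
    return annotated


def attribute_response(image, elements, question, response, query_mllm):
    """Ask the MLLM which marked elements support `response`; return mark ids."""
    annotated = overlay_marks(image, elements)
    prompt = (
        f"Question: {question}\nAnswer: {response}\n"
        "Each chart element is labeled with a red numeric mark. "
        "List the mark numbers that support the answer, comma-separated."
    )
    raw = query_mllm(annotated, prompt)  # placeholder for the model call
    return [int(tok) for tok in raw.replace(",", " ").split() if tok.isdigit()]
```

Passing `query_mllm` in as a callable keeps the sketch model-agnostic; the returned mark ids map back to the segmented chart elements, which is what makes the attribution fine-grained rather than image-level.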
Community
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- ChartMind: A Comprehensive Benchmark for Complex Real-world Multimodal Chart Question Answering (2025)
- Socratic Chart: Cooperating Multiple Agents for Robust SVG Chart Understanding (2025)
- ChartQAPro: A More Diverse and Challenging Benchmark for Chart Question Answering (2025)
- ChartQA-X: Generating Explanations for Charts (2025)
- Adaptive Markup Language Generation for Contextually-Grounded Visual Document Understanding (2025)
- FinRAGBench-V: A Benchmark for Multimodal RAG with Visual Citation in the Financial Domain (2025)
- FG-CLIP: Fine-Grained Visual and Textual Alignment (2025)