Junteng committed
Commit 0608b0b · verified · 1 Parent(s): 40a15ce

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +6 -13
README.md CHANGED
@@ -1,23 +1,16 @@
-# CLIP Model for Chart Understanding
+# Data for CLIP Training on Chart Task

-This repository contains the CLIP model implementation from our paper "[On the Perception Bottleneck of VLMs for Chart Understanding](https://arxiv.org/abs/2503.18435)".
+This repository contains the CLIP training data from our paper "[On the Perception Bottleneck of VLMs for Chart Understanding](https://arxiv.org/abs/2503.18435)".

-## Overview

-This CLIP model is specifically trained to address the perception bottleneck in Vision Language Models (VLMs) when processing and understanding charts and visualizations. Our work explores and aims to improve how CLIP affects the LVLMs built on it.
-
-
-## Model Details
-
-
-- Model architecture: trained from openai/clip-vit-large-patch14-336
-- Training data: our collected and synthetic hard-negative chart data ([Vision4Chart Dataset](https://huggingface.co/datasets/Junteng/Vision4Chart))
-- Training method: NegCLIP training
+## Data Details

+- Data source: mainly chart-task data such as ChartQA, FigureQA, and DVQA.
+- Data overview: each example contains an image, a correct caption, and a wrong caption.

 ## Citation

-If you find this model useful in your research, please consider citing our paper:
+If you find this data useful in your research, please consider citing our paper:

 ```bibtex
 @misc{liu2025perceptionbottleneckvlmschart,
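The updated README describes each example as a chart image paired with one correct and one wrong caption, which is the kind of triplet a NegCLIP-style contrastive setup consumes. As a rough sketch of how such an example could be inspected and scored with the baseline CLIP checkpoint named in the old Model Details section — the split name and the column names `image`, `caption`, and `negative_caption` are assumptions for illustration, not taken from the dataset card — one might do:

```python
# Sketch: load one Vision4Chart example and score its correct vs. wrong caption
# with a baseline CLIP model. The split name and column names ("image",
# "caption", "negative_caption") are assumed, not confirmed by the repository.
import torch
from datasets import load_dataset
from transformers import CLIPModel, CLIPProcessor

# Dataset linked from the README; "train" split is an assumption.
ds = load_dataset("Junteng/Vision4Chart", split="train")
example = ds[0]

# Baseline checkpoint cited in the original README's Model Details.
model_name = "openai/clip-vit-large-patch14-336"
model = CLIPModel.from_pretrained(model_name)
processor = CLIPProcessor.from_pretrained(model_name)

captions = [example["caption"], example["negative_caption"]]  # correct, wrong
inputs = processor(
    text=captions,
    images=example["image"],
    return_tensors="pt",
    padding=True,
    truncation=True,
)

with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image has shape (1, 2): similarity of the chart to each caption.
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(["correct", "wrong"], probs[0].tolist())))
```

In a NegCLIP-style training run, the wrong caption would additionally act as an in-batch hard negative, so the encoder is explicitly pushed to rank the correct caption above the subtly wrong one.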