Upload README.md with huggingface_hub
README.md CHANGED
@@ -1,23 +1,16 @@
-# CLIP
+# Data for CLIP Training on Chart Task
 
-This repository contains the CLIP
+This repository contains the CLIP Training data implementation from our paper "[On the Perception Bottleneck of VLMs for Chart Understanding](https://arxiv.org/abs/2503.18435)".
 
-## Overview
-
-## Model Details
-
-- Model architecture: trained from openai/clip-vit-large-patch14-336
-- Training data: from our collected and synthetic hard-negative chart data ([Vision4Chart Dataset](https://huggingface.co/datasets/Junteng/Vision4Chart))
-- Training method: NegCLIP training
+## Data Details
+
+- Data source: mainly chart-task data such as ChartQA, FigureQA, and DVQA.
+- Data overview: each example contains an image, a correct caption, and a wrong caption.
 
 ## Citation
 
-If you find this
+If you find this data useful in your research, please consider citing our paper:
 
 ```bibtex
 @misc{liu2025perceptionbottleneckvlmschart,
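
The added "Data Details" lines say each example pairs a chart image with a correct caption and a wrong (hard-negative) caption. Below is a minimal sketch of loading and inspecting one example; the split name and the column names `image`, `correct_caption`, and `wrong_caption` are assumptions rather than documented fields, so check `ds.column_names` against the actual dataset card.

```python
# Sketch only: inspect one example from the Vision4Chart data.
# The split and column names below are assumptions -- verify them first.
from datasets import load_dataset

ds = load_dataset("Junteng/Vision4Chart", split="train")
print(ds.column_names)  # confirm the real field names

example = ds[0]
image = example["image"]                  # assumed: PIL image of the chart
pos_caption = example["correct_caption"]  # assumed: caption that matches the chart
neg_caption = example["wrong_caption"]    # assumed: hard-negative caption
print(pos_caption, "||", neg_caption)
```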
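The removed "Model Details" lines mention NegCLIP training starting from openai/clip-vit-large-patch14-336. The sketch below is only a rough illustration of how image/correct-caption/wrong-caption triples can feed a NegCLIP-style contrastive step, with hard negatives appended to the text batch; it is not the paper's training code, and batching and loss weighting are simplified assumptions.

```python
# Rough NegCLIP-style step (illustration, not the paper's code): each image must
# rank its correct caption above other captions and above its own hard negative.
import torch
import torch.nn.functional as F
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14-336")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14-336")

def negclip_step(images, correct_captions, wrong_captions):
    # Text batch = correct captions followed by their hard negatives.
    texts = list(correct_captions) + list(wrong_captions)
    inputs = processor(text=texts, images=images, return_tensors="pt",
                       padding=True, truncation=True)
    out = model(**inputs)
    n = len(images)
    # logits_per_image has shape (n_images, n_texts); the correct caption for
    # image i sits at text position i, so the target index is simply i.
    targets = torch.arange(n)
    loss_i2t = F.cross_entropy(out.logits_per_image, targets)
    # Text-to-image direction, restricted to the correct captions.
    loss_t2i = F.cross_entropy(out.logits_per_text[:n], targets)
    return (loss_i2t + loss_t2i) / 2
```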