---
license: mit
task_categories:
- text-to-image
- visual-question-answering
language:
- en
---
# Data Statistics of M2RAG

Click the links below to view our paper and GitHub project.

<a href='https://arxiv.org/abs/2502.17297'><img src='https://img.shields.io/badge/Paper-Arxiv-red'></a> <a href='https://github.com/NEUIR/M2RAG'><img src="https://img.shields.io/badge/Github-M2RAG-blue?logo=Github"></a>

If you find this work useful, please cite our paper and give us a shining star 🌟 on GitHub:

```bibtex
@misc{liu2025benchmarkingretrievalaugmentedgenerationmultimodal,
      title={Benchmarking Retrieval-Augmented Generation in Multi-Modal Contexts},
      author={Zhenghao Liu and Xingsheng Zhu and Tianshuo Zhou and Xinyi Zhang and Xiaoyuan Yi and Yukun Yan and Yu Gu and Ge Yu and Maosong Sun},
      year={2025},
      eprint={2502.17297},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2502.17297},
}
```
## 🎃 Overview

The **M²RAG** benchmark evaluates Multi-modal Large Language Models (MLLMs) on answering questions with multi-modal retrieved documents. It includes four tasks: image captioning, multi-modal QA, fact verification, and image reranking, which together assess MLLMs' ability to leverage knowledge from multi-modal contexts.

<p align="center">
<img align="middle" src="https://raw.githubusercontent.com/NEUIR/M2RAG/main/assets/m2rag.png" style="width: 600px;" alt="m2rag"/>
</p>

## 🎃 Data Storage Structure

The data storage structure of M2RAG is as follows:

```
M2RAG/
├── fact_verify/
├── image_cap/
├── image_rerank/
├── mmqa/
├── imgs.lineidx.new
└── imgs.tsv
```

❗️Note: To obtain `imgs.tsv`, follow the instructions in the [WebQA](https://github.com/WebQnA/WebQA?tab=readme-ov-file#download-data) project. Specifically, first download all the data from the folder [WebQA_imgs_7z_chunks](https://drive.google.com/drive/folders/19ApkbD5w0I5sV1IeQ9EofJRyAjKnA7tb), then run `7z x imgs.7z.001` to unzip and merge all the chunks into `imgs.tsv`.
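
The companion `imgs.lineidx.new` file enables random access into the large `imgs.tsv` without loading it all into memory. Below is a minimal sketch of that lookup, assuming the WebQA-style layout in which each `imgs.tsv` row is `image_id<TAB>base64_image` and each index row is the byte offset of the corresponding tsv line; verify the exact column layout against the downloaded data. The demo writes a tiny synthetic tsv instead of the real file.

```python
import base64

def load_image_row(tsv_path, lineidx_path, idx):
    """Fetch row `idx` of imgs.tsv via the byte offsets stored in the lineidx file."""
    with open(lineidx_path) as f:
        offsets = [int(line) for line in f if line.strip()]
    with open(tsv_path, "rb") as f:
        f.seek(offsets[idx])               # jump straight to the requested row
        row = f.readline().decode("utf-8").rstrip("\n")
    image_id, b64_data = row.split("\t", 1)
    return image_id, base64.b64decode(b64_data)

# Demo on a tiny synthetic file (real rows hold base64-encoded image bytes).
rows = [("img_0", b"fake-bytes-0"), ("img_1", b"fake-bytes-1")]
offset = 0
with open("imgs.tsv", "wb") as tsv, open("imgs.lineidx.new", "w") as idx_file:
    for image_id, payload in rows:
        idx_file.write(f"{offset}\n")      # record where this row starts
        encoded = f"{image_id}\t{base64.b64encode(payload).decode()}\n".encode()
        tsv.write(encoded)
        offset += len(encoded)

print(load_image_row("imgs.tsv", "imgs.lineidx.new", 1))  # ('img_1', b'fake-bytes-1')
```

Seeking by precomputed offset keeps per-image lookup O(1) in file size, which matters since the full WebQA image tsv is tens of gigabytes.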