---
license: apache-2.0
---

# VideoHallu: Evaluating and Mitigating Multi-modal Hallucinations for Synthetic Videos

[Zongxia Li*](https://zli12321.github.io/), [Xiyang Wu*](https://wuxiyang1996.github.io/), [Yubin Qin](https://www.linkedin.com/in/yubin-qin/), [Guangyao Shi](https://guangyaoshi.github.io/), [Hongyang Du](https://www.linkedin.com/in/hongyangdu/), [Dinesh Manocha](https://www.cs.umd.edu/people/dmanocha), [Tianyi Zhou](https://tianyizhou.github.io/), [Jordan Lee Boyd-Graber](https://users.umiacs.umd.edu/~ying/)

[[📖 Paper](https://arxiv.org/pdf/2503.21776)] [[🤗 Dataset](https://huggingface.co/datasets/zli12321/VideoHalluB)]

## 👀 About VideoHallu

With the recent success of video generation models such as [Sora](https://openai.com/sora/), [Veo2](https://veo2.ai), and [Kling](https://www.klingai.com/global/), the visual quality of generated videos has reached new heights, making evaluation more challenging and pushing it beyond traditional metrics like frame consistency, resolution, and realism. However, we find that multi-modal large language models (MLLMs) struggle to detect abnormalities in generated videos, a capability that is crucial for building reliable automatic video evaluation methods.

We introduce VideoHallu, a curated dataset of videos generated by seven video generation models, paired with a question-answer set that tests MLLMs' ability to catch abnormalities in the generated videos.

We also use GRPO to train [Qwen-2.5-VL-7B](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) on a subset of our dataset and show improved understanding of generated videos.
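
For orientation, here is a minimal sketch of this kind of GRPO fine-tuning using TRL's `GRPOTrainer`. It treats the data as text-only QA with an exact-match reward: the column names (`prompt`, `answer`), the reward definition, and the hyperparameters are illustrative assumptions rather than our actual recipe, and feeding video frames to the model would require a Video-R1-style pipeline not shown here.

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Hypothetical text-QA view of the dataset with "prompt" and "answer"
# columns; the real training data and recipe may differ.
train_ds = load_dataset("zli12321/VideoHallu", split="train")

def exact_match_reward(completions, answer, **kwargs):
    # TRL passes extra dataset columns (here, "answer") as keyword arguments.
    # Reward 1.0 for an exact (case-insensitive) match, else 0.0.
    return [float(c.strip().lower() == a.strip().lower())
            for c, a in zip(completions, answer)]

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-VL-7B-Instruct",
    reward_funcs=exact_match_reward,
    args=GRPOConfig(output_dir="qwen2.5-vl-7b-grpo", logging_steps=10),
    train_dataset=train_ds,
)
trainer.train()
```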

## 🔥 News
- [2025/05/02] We release our dataset on Hugging Face 🤗.

## 🔍 Dataset

To facilitate GRPO training, we also randomly sample 1,000 videos from the [PhysBench](https://huggingface.co/datasets/WeiChow/PhysBench-train) training data to first improve the model's reasoning abilities on real-world videos, and then train the model on part of our synthetic videos.
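
A minimal sketch of that warm-up sampling with the 🤗 `datasets` library; the split name and the seed are assumptions, not the exact values we used.

```python
from datasets import load_dataset

# Randomly sample 1,000 PhysBench training examples for the real-world
# warm-up stage (split name and seed are illustrative assumptions).
physbench = load_dataset("WeiChow/PhysBench-train", split="train")
warmup = physbench.shuffle(seed=42).select(range(1000))
print(len(warmup))  # 1000
```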

Our data spans the following categories:

<img src="./images/fig1.png" style="zoom:35%;" />

## Getting Started

```bash
# Install the Hugging Face Hub client
pip install huggingface_hub

# Download the dataset to a local directory
huggingface-cli download zli12321/VideoHallu --repo-type dataset --local-dir ./new_video_folders --local-dir-use-symlinks False
```
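
After downloading, the annotations can be inspected locally with pandas. The parquet path below is a placeholder assumption; list the downloaded folder to find the actual file layout.

```python
import pandas as pd

# The exact parquet filename is an assumption; check ./new_video_folders
# for the real layout after the download finishes.
df = pd.read_parquet("./new_video_folders/data/train-00000-of-00001.parquet")
print(df.head())
```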

## The Dawn of MLLMs in Synthetic Videos 🧠
---

<!-- 🐦 Quail to Rooster -->
<div style="border: 2px solid #ddd; border-radius: 10px; padding: 16px; margin-bottom: 20px; background-color: #f9f9f9;">

<details open>
<summary><strong>🎬 Video:</strong> Quail Transforming into Rooster</summary>

<p><strong>Prompt (Sora):</strong> Generate a quail and a rooster celebrating New Year.</p>

<p align="center" style="margin: 0;">
<img src="images/rooster.gif" width="400"/>
<img src="images/131021746146018_.pic.jpg" width="500"/>
</p>

</details>
</div>

---

<!-- 🪶 Feather vs. Rock -->
<div style="border: 2px solid #ddd; border-radius: 10px; padding: 16px; margin-bottom: 20px; background-color: #f9f9f9;">

<details open>
<summary><strong>🎬 Video:</strong> Object Falling and Law of Physics</summary>
<p><strong>Prompt (Veo2):</strong> A feather and a heavy rock are released at the same height and begin to fall to the ground on Earth.</p>
<p align="center" style="margin: 0;">
<img src="images/feather_veo2.gif" width="400"/>
<img src="images/130281746130630_.pic.jpg" width="500"/>
</p>
</details>
</div>

---

<!-- 🍷 Wine Drinking -->
<div style="border: 2px solid #ddd; border-radius: 10px; padding: 16px; margin-bottom: 20px; background-color: #f9f9f9;">
<details open>
<summary><strong>🎬 Video:</strong> Object Contact Abnormalities</summary>
<p><strong>Prompt (Sora):</strong> Generate a man drinking up a cup of wine.</p>
<p align="center" style="margin: 0;">
<img src="images/man_drinking_wine.gif" width="500"/>
<img src="images/130291746131015_.pic.jpg" width="600"/>
</p>
</details>
</div>

---

<!-- 🍉 Bullet and Watermelon -->
<div style="border: 2px solid #ddd; border-radius: 10px; padding: 16px; margin-bottom: 20px; background-color: #f9f9f9;">
<details open>
<summary><strong>🎬 Video:</strong> Breaking Process</summary>
<p><strong>Prompt (Sora):</strong> Generate the sequence showing a bullet being shot into a watermelon.</p>
<p align="center" style="margin: 0;">
<img src="images/watermelon_explode-ezgif.com-video-to-gif-converter.gif" width="400"/>
<img src="images/130301746131484_.pic.jpg" width="500"/>
</p>
</details>
</div>

## Acknowledgements

We sincerely appreciate the contributions of the open-source community. The related projects are as follows: [R1-V](https://github.com/Deep-Agent/R1-V), [DeepSeek-R1](https://github.com/deepseek-ai/DeepSeek-R1), [Video-R1](https://github.com/tulerfeng/Video-R1), [Qwen-2.5-VL](https://arxiv.org/abs/2502.13923).

## Citations

If you find our work helpful for your research, please consider citing it.

```bibtex
@article{feng2025video,
  title={Video-R1: Reinforcing Video Reasoning in MLLMs},
  author={Feng, Kaituo and Gong, Kaixiong and Li, Bohao and Guo, Zonghao and Wang, Yibing and Peng, Tianshuo and Wang, Benyou and Yue, Xiangyu},
  journal={arXiv preprint arXiv:2503.21776},
  year={2025}
}
```