Datasets · Modalities: Text, Video · Formats: parquet · Languages: English · ArXiv: 2505.01481 · Libraries: Datasets, pandas
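The tags above list parquet files and the Datasets/pandas libraries. Below is a minimal loading sketch under those assumptions: the repository ID comes from the dataset link in the README diff further down, while the split and column layout are not guaranteed and may differ from what the card actually ships.

```python
# Minimal sketch: load the VideoHallu parquet data with the Hugging Face
# `datasets` library and inspect it with pandas. The repo ID comes from the
# dataset link in the README; split and column names are assumptions.
from datasets import load_dataset

ds = load_dataset("IntelligenceLab/VideoHallu")  # downloads the parquet shards
print(ds)                                        # shows the available splits and columns

split = list(ds.keys())[0]                       # e.g. "train"; actual split names may differ
df = ds[split].to_pandas()                       # hand off to pandas for quick exploration
print(df.head())
```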
wuxiyang committed (verified) · Commit 0ca6076 · 1 Parent(s): a585717

Update README.md

Files changed (1):
  1. README.md +4 -8
README.md CHANGED
@@ -8,7 +8,7 @@ language:
 
 # VideoHallu: Evaluating and Mitigating Multi-modal Hallucinations for Synthetic Videos
 
- [Zongxia Li*](https://zli12321.github.io/), [Xiyang Wu*](https://wuxiyang1996.github.io/), [Yubin Qin](https://www.linkedin.com/in/yubin-qin/), [Hongyang Du](https://smashedpython.github.io/HongyangDu.github.io/), [Guangyao Shi](https://guangyaoshi.github.io/), [Dinesh Manocha](https://www.cs.umd.edu/people/dmanocha), [Tianyi Zhou](https://tianyizhou.github.io/), [Jordan Lee Boyd-Graber](https://users.umiacs.umd.edu/~ying/)
+ [Zongxia Li*](https://zli12321.github.io/), [Xiyang Wu*](https://wuxiyang1996.github.io/), [Guangyao Shi](https://guangyaoshi.github.io/), [Yubin Qin](https://www.linkedin.com/in/yubin-qin/), [Hongyang Du](https://hongyang-du.github.io/), [Tianyi Zhou](https://tianyizhou.github.io/), [Dinesh Manocha](https://www.cs.umd.edu/people/dmanocha), [Jordan Lee Boyd-Graber](https://users.umiacs.umd.edu/~ying/)
 
 [[📖 Paper](https://arxiv.org/abs/2505.01481)] [[🤗 Dataset](https://huggingface.co/datasets/IntelligenceLab/VideoHallu)] [[🌍Website](https://wuxiyang1996.github.io/videohallu_page/)]
 
@@ -16,11 +16,7 @@ language:
 
 ## 👀 About VideoHallu
 
- Synthetic video generation using foundation models has gained significant attention due to its realism and broad applications. However, while these models excel at generating visually coherent and high-quality video frames, they often overlook commonsense reasoning and physical law violations, leading to abnormal content. Existing score-based evaluations like [VideoScore](https://arxiv.org/abs/2406.15252) mainly focus on general video quality and do not take these abnormalities into account, and offer no explanations of the evaluation results. A more promising evaluation approach is to leverage multi-modal large language models (MLLMs) as interpretable video evaluators, following the approach of [FActScore](https://arxiv.org/abs/2305.14251). However, how well MLLMs can detect these abnormalities in synthetic videos is underexplored.
-
- Motivated by a more interpretable video generation evaluation, we introduce VideoHallu, a benchmark built from synthetic videos produced by popular models like [Sora](https://openai.com/sora/), [Veo2](https://veo2.ai), [Kling](https://www.klingai.com/global/), paired with expert-crafted question-answering pair examples easily solvable with human-level perception and reasoning across multiple categories. We evaluate several State-of-the-Art (SoTA) MLLMs with our benchmark, including [GPT-4o](https://openai.com/index/hello-gpt-4o/), [Gemini-2.5-Pro](https://deepmind.google/technologies/gemini/pro/), [Qwen-2.5-VL](https://github.com/QwenLM/Qwen2.5-VL), and forefront models like [Video-R1](https://github.com/tulerfeng/Video-R1) and [VideoChat-R1](https://github.com/OpenGVLab/VideoChat-R1). Despite the strong performance of R1 MLLMs on real-world video benchmarks like [MVBench](https://huggingface.co/datasets/OpenGVLab/MVBench) and [MovieChat](https://github.com/rese1f/MovieChat), these models still struggle and hallucinate on basic commonsense and physics reasoning tasks in synthetic videos, highlighting synthetic video hallucination as an underexplored challenge.
-
- Moreover, we post-train current SoTA MLLMs, [Qwen-2.5-VL-7B](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct), with [Group Relative Policy Optimization (GRPO)](https://arxiv.org/abs/2501.12948) using both real-world and synthetic commonsense/physics datasets. Our results show improved overall accuracy compared to the base model, achieving the highest performance among all models, highlighting the importance of integrating high-quality counterexamples to enhance commonsense and physics reasoning in MLLMs' language priors.
+ Synthetic video generation has gained significant attention for its realism and broad applications, but remains prone to violations of common sense and physical laws. This highlights the need for reliable abnormality detectors that understand such principles and are robust to hallucinations. To address this, we introduce VideoHallu, a benchmark of over 3,000 video QA pairs built from synthetic videos generated by models like [Sora](https://openai.com/sora/), [Veo2](https://veo2.ai), [Kling](https://www.klingai.com/global/), paired with expert-crafted counterintuitive QA to evaluate the critical thinking abilities of Multi-modal Large Language Models (MLLMs) on abnormalities that are perceptually obvious to humans but often hallucinated due to language priors. VideoHallu evaluates MLLMs' abnormality detection abilities with examples across alignment, consistency, commonsense, and physics. We benchmark SOTA MLLMs, including [GPT-4o](https://openai.com/index/hello-gpt-4o/), [Gemini-2.5-Pro](https://deepmind.google/technologies/gemini/pro/), [Qwen-2.5-VL](https://github.com/QwenLM/Qwen2.5-VL), and forefront models like [Video-R1](https://github.com/tulerfeng/Video-R1) and [VideoChat-R1](https://github.com/OpenGVLab/VideoChat-R1). We observe that these models perform well on many real-world benchmarks like [MVBench](https://huggingface.co/datasets/OpenGVLab/MVBench) and [MovieChat](https://github.com/rese1f/MovieChat), but still struggle with basic physics-based and commonsense reasoning in synthetic videos. We further show that post-training with Group Relative Policy Optimization (GRPO), using curriculum learning on datasets combining video QA with counterintuitive commonsense and physics reasoning over real and synthetic videos, improves MLLMs’ abnormality detection and critical thinking, demonstrating the value of targeted training for improving their understanding of commonsense and physical laws.
 
 ## 🔥 News
 - [2025/05/02] We expand our dataset with more QA pairs🤗.
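The rewritten About paragraph above attributes the accuracy gains to post-training with Group Relative Policy Optimization (GRPO). As a rough, hedged illustration of the group-relative advantage that GRPO optimizes (a sketch, not the authors' training pipeline; the rewards are placeholder exact-match scores):

```python
# Illustrative sketch of GRPO's group-relative advantage (not the authors' code).
# For one video QA prompt, the policy samples a group of candidate answers; each
# receives a scalar reward (here, 1.0 if it matches the ground truth, else 0.0).
# GRPO centers and scales rewards within the group, so each answer is compared
# against its siblings instead of a learned value baseline.
import numpy as np

def group_relative_advantages(rewards, eps=1e-6):
    rewards = np.asarray(rewards, dtype=np.float64)
    return (rewards - rewards.mean()) / (rewards.std() + eps)

# Hypothetical rewards for four sampled answers to one counterintuitive question.
rewards = [1.0, 0.0, 0.0, 1.0]
print(group_relative_advantages(rewards))  # positive for correct answers, negative otherwise
```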
@@ -41,7 +37,7 @@ Moreover, we post-train current SoTA MLLMs, [Qwen-2.5-VL-7B](https://huggingface
 
 ## 🔍 <a name='benchmark'></a>Benchmark
 
- We design our benchmark, VideoHallu, around four question categories aimed at probing hallucinations in synthetic video understanding, organized by the level of reasoning required from MLLMs to perform video-question answering in practice. The benchmark spans from perceptual understanding to high-level abstract reasoning.
+ We design our benchmark, VideoHallu, with four question categories to probe hallucinations in synthetic video understanding, covering perceptual understanding to abstract reasoning:
 * **Alignment** checks if the model correctly identifies and understands entities using visual and textual cues.
 * **Spatial-temporal Consistency** examines whether the model can track entity motion across frames.
 * **Common Sense Reasoning** tests if the model can reason based on its knowledge.
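The bullets above show three of the four categories (the About paragraph also names physics), and they organize how the QA pairs are grouped. Below is a minimal sketch of tallying per-category accuracy from model predictions; the `category`/`answer` column names and the exact-match scoring are assumptions, not the benchmark's official evaluation protocol.

```python
# Sketch of per-category accuracy over the benchmark's QA pairs (illustrative
# only; column names and exact-match scoring are assumptions).
import pandas as pd

def per_category_accuracy(df: pd.DataFrame, predictions: list) -> pd.Series:
    scored = df.assign(
        correct=[p.strip().lower() == a.strip().lower()
                 for p, a in zip(predictions, df["answer"])]
    )
    return scored.groupby("category")["correct"].mean()

# Hypothetical rows mirroring the four categories named in the README.
qa = pd.DataFrame({
    "category": ["Alignment", "Spatial-temporal Consistency",
                 "Common Sense Reasoning", "Physics"],
    "answer": ["a cat", "moves left", "ice melts", "the glass shatters"],
})
preds = ["a cat", "moves right", "ice melts", "the glass shatters"]
print(per_category_accuracy(qa, preds))
```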
@@ -81,7 +77,7 @@ unrar x video.part1.rar
 
 ## <a name='showcase'></a>🧠 The Dawn of MLLMs in Synthetic Videos
 
- We present selected cases from SoTA MLLM evaluations across each category. Hallucinations in model answers, common sense or physics violations in videos, and other notable cues in the video, questions, or ground truth are highlighted to assist the reader's understanding. More examples can be found in the Appendix of [our paper](https://arxiv.org/abs/2505.01481).
+ We collect hallucination cases observed during SOTA MLLM evaluations on synthetic video tasks. Each example includes the generation prompt, key frames, questions, human-annotated ground truth, and hallucinated answers from GPT-4o, Qwen2.5-VL, and Gemini-2.5-Pro, with hallucinations marked in red to assist the reader's understanding. More examples can be found in the Appendix of [our paper](https://arxiv.org/abs/2505.01481).
 
 **Note:** The legend below explains all the symbols used to represent the State-of-the-Art (SoTA) MLLMs featured in our showcases for synthetic video generation and video question-answering.
 <p align="center">