---
license: cc-by-4.0
task_categories:
  - zero-shot-classification
  - text-classification
  - text-generation
language:
  - en
  - zh
size_categories:
  - 10K<n<100K
pretty_name: MMLA
---

# Can Large Language Models Help Multimodal Language Analysis? MMLA: A Comprehensive Benchmark

## 1. Introduction

MMLA is the first comprehensive multimodal language analysis benchmark for evaluating foundation models. It has the following features:

- Large Scale: 61K+ multimodal samples.
- Various Sources: 9 datasets.
- Three Modalities: text, video, and audio.
- Both Acting and Real-world Scenarios: films, TV series, YouTube, Vimeo, Bilibili, TED talks, improvised scripts, etc.
- Six Core Dimensions in Multimodal Language Analysis: intent, emotion, sentiment, dialogue act, speaking style, and communication behavior.

We also build baselines with three evaluation methods (zero-shot inference, supervised fine-tuning, and instruction tuning) on 8 mainstream foundation models: 5 MLLMs (Qwen2-VL, VideoLLaMA2, LLaVA-Video, LLaVA-OV, MiniCPM-V-2.6) and 3 LLMs (InternLM2.5, Qwen2, LLaMA3). More details can be found in our paper.
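
To make the zero-shot setting concrete, the sketch below shows how a single utterance could be scored with a text-only LLM baseline: the model sees the utterance plus the candidate labels and must answer with exactly one label. The prompt wording, the illustrative label subset, and the answer-parsing rule are our own assumptions for illustration, not the official MMLA evaluation protocol.

```python
# Minimal zero-shot inference sketch. Assumptions: the prompt wording, the
# illustrative label subset, and the parsing rule are NOT the official MMLA
# protocol; any instruction-tuned LLM from the baselines above could be used.
from transformers import pipeline

CANDIDATE_LABELS = ["complain", "praise", "apologise", "thank", "criticize"]  # illustrative subset only


def build_prompt(utterance, labels):
    """Ask the model to pick exactly one label for the utterance."""
    return (
        "You are analysing the speaker's intent in a conversation.\n"
        f'Utterance: "{utterance}"\n'
        f"Choose exactly one label from: {', '.join(labels)}.\n"
        "Answer with the label only."
    )


def parse_label(generated, labels):
    """Return the first candidate label mentioned in the model output, if any."""
    lowered = generated.lower()
    for label in labels:
        if label in lowered:
            return label
    return None  # no valid label found; counted as incorrect when scoring


if __name__ == "__main__":
    # Qwen2-7B-Instruct is one of the text-only LLM baselines named above.
    generator = pipeline("text-generation", model="Qwen/Qwen2-7B-Instruct")
    prompt = build_prompt("I can't believe you forgot my birthday again.", CANDIDATE_LABELS)
    answer = generator(prompt, max_new_tokens=10, do_sample=False, return_full_text=False)[0]["generated_text"]
    print("predicted label:", parse_label(answer, CANDIDATE_LABELS))
```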

## 2. Datasets

### 2.1 Statistics

Dataset statistics for each dimension in the MMLA benchmark. #C, #U, #Train, #Val, and #Test represent the number of label classes, utterances, training, validation, and testing samples, respectively. avg. and max. refer to the average and maximum lengths.

| Dimensions | Datasets | #C | #U | #Train | #Val | #Test | Video Hours | Source | Video Length (avg. / max.) | Text Length (avg. / max.) | Language |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Intent | MIntRec | 20 | 2,224 | 1,334 | 445 | 445 | 1.5 | TV series | 2.4 / 9.6 | 7.6 / 27.0 | English |
| | MIntRec2.0 | 30 | 9,304 | 6,165 | 1,106 | 2,033 | 7.5 | TV series | 2.9 / 19.9 | 8.5 / 46.0 | |
| Dialogue Act | MELD | 12 | 9,989 | 6,992 | 999 | 1,998 | 8.8 | TV series | 3.2 / 41.1 | 8.6 / 72.0 | English |
| | IEMOCAP | 12 | 9,416 | 6,590 | 942 | 1,884 | 11.7 | Improvised scripts | 4.5 / 34.2 | 12.4 / 106.0 | |
| Emotion | MELD | 7 | 13,708 | 9,989 | 1,109 | 2,610 | 12.2 | TV series | 3.2 / 305.0 | 8.7 / 72.0 | English |
| | IEMOCAP | 6 | 7,532 | 5,237 | 521 | 1,622 | 9.6 | Improvised scripts | 4.6 / 34.2 | 12.8 / 106.0 | |
| Sentiment | MOSI | 2 | 2,199 | 1,284 | 229 | 686 | 2.6 | YouTube | 4.3 / 52.5 | 12.5 / 114.0 | English |
| | CH-SIMS v2.0 | 3 | 4,403 | 2,722 | 647 | 1,034 | 4.3 | TV series, films | 3.6 / 42.7 | 1.8 / 7.0 | Mandarin |
| Speaking Style | UR-FUNNY-v2 | 2 | 9,586 | 7,612 | 980 | 994 | 12.9 | TED | 4.8 / 325.7 | 16.3 / 126.0 | English |
| | MUStARD | 2 | 690 | 414 | 138 | 138 | 1.0 | TV series | 5.2 / 20.0 | 13.1 / 68.0 | |
| Communication Behavior | Anno-MI (client) | 3 | 4,713 | 3,123 | 461 | 1,128 | 10.8 | YouTube & Vimeo | 8.2 / 600.0 | 16.3 / 266.0 | English |
| | Anno-MI (therapist) | 4 | 4,773 | 3,161 | 472 | 1,139 | 12.1 | | 9.1 / 1316.1 | 17.9 / 205.0 | |

### 2.2 License

This benchmark uses nine datasets, each of which is employed strictly in accordance with its official license and exclusively for academic research purposes. We fully respect the datasets' copyright policies, license requirements, and ethical standards. For datasets whose licenses explicitly permit redistribution (e.g., MIntRec, MIntRec2.0, MELD, MELD-DA, UR-FUNNY-v2, MUStARD, CH-SIMS v2.0, and Anno-MI), we release the original video data. For datasets that restrict video redistribution (e.g., MOSI, IEMOCAP, and IEMOCAP-DA), users should obtain the videos directly from their official repositories. In compliance with all relevant licenses, we also provide the original textual data unchanged, together with the specific dataset splits used in our experiments. This approach ensures reproducibility and academic transparency while strictly adhering to copyright obligations and protecting the privacy of individuals featured in the videos.
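
As a quick-start illustration, the snippet below sketches how the released files could be fetched from this repository with `huggingface_hub`. The `repo_id` and the per-dataset folder pattern are assumptions about the file layout, not documented paths; adjust them to the actual file tree shown on this page.

```python
# Minimal download sketch. Assumptions: the repo_id below and the "MIntRec/*"
# folder pattern are illustrative guesses, not documented paths.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="HanleiZhang/MMLA-Datasets",  # assumed id of this dataset repository
    repo_type="dataset",
    allow_patterns=["MIntRec/*"],         # hypothetical sub-folder: fetch one constituent dataset only
)
print("Files downloaded to:", local_dir)
```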

## 3. Leaderboard

### 3.1 Rank of Zero-shot Inference

| Rank | Models | ACC | Type |
|---|---|---|---|
| 🥇 | GPT-4o | 52.60 | MLLM |
| 🥈 | Qwen2-VL-72B | 52.55 | MLLM |
| 🥉 | LLaVA-OV-72B | 52.44 | MLLM |
| 4 | LLaVA-Video-72B | 51.64 | MLLM |
| 5 | InternLM2.5-7B | 50.28 | LLM |
| 6 | Qwen2-7B | 48.45 | LLM |
| 7 | Qwen2-VL-7B | 47.12 | MLLM |
| 8 | Llama3-8B | 44.06 | LLM |
| 9 | LLaVA-Video-7B | 43.32 | MLLM |
| 10 | VideoLLaMA2-7B | 42.82 | MLLM |
| 11 | LLaVA-OV-7B | 40.65 | MLLM |
| 12 | Qwen2-1.5B | 40.61 | LLM |
| 13 | MiniCPM-V-2.6-8B | 37.03 | MLLM |
| 14 | Qwen2-0.5B | 22.14 | LLM |

### 3.2 Rank of Supervised Fine-tuning (SFT) and Instruction Tuning (IT)

| Rank | Models | ACC | Type |
|---|---|---|---|
| 🥇 | Qwen2-VL-72B (SFT) | 69.18 | MLLM |
| 🥈 | MiniCPM-V-2.6-8B (SFT) | 68.88 | MLLM |
| 🥉 | LLaVA-Video-72B (IT) | 68.87 | MLLM |
| 4 | LLaVA-OV-72B (SFT) | 68.67 | MLLM |
| 5 | Qwen2-VL-72B (IT) | 68.64 | MLLM |
| 6 | LLaVA-Video-72B (SFT) | 68.44 | MLLM |
| 7 | VideoLLaMA2-7B (SFT) | 68.30 | MLLM |
| 8 | Qwen2-VL-7B (SFT) | 67.60 | MLLM |
| 9 | LLaVA-OV-7B (SFT) | 67.54 | MLLM |
| 10 | LLaVA-Video-7B (SFT) | 67.47 | MLLM |
| 11 | Qwen2-VL-7B (IT) | 67.34 | MLLM |
| 12 | MiniCPM-V-2.6-8B (IT) | 67.25 | MLLM |
| 13 | Llama3-8B (SFT) | 66.18 | LLM |
| 14 | Qwen2-7B (SFT) | 66.15 | LLM |
| 15 | InternLM2.5-7B (SFT) | 65.72 | LLM |
| 16 | Qwen2-7B (IT) | 64.58 | LLM |
| 17 | InternLM2.5-7B (IT) | 64.41 | LLM |
| 18 | Llama3-8B (IT) | 64.16 | LLM |
| 19 | Qwen2-1.5B (SFT) | 64.00 | LLM |
| 20 | Qwen2-0.5B (SFT) | 62.80 | LLM |
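
The ACC column in both leaderboards is a single accuracy score aggregated over the evaluated datasets. The sketch below shows one plausible aggregation, an unweighted macro-average of per-dataset accuracy reported in percent; whether the official number is averaged over datasets or over all test utterances is an assumption here, so refer to the paper for the exact definition.

```python
# Sketch of ACC aggregation. Assumption: ACC is an unweighted mean of
# per-dataset accuracies (the paper defines the official aggregation).
def accuracy(preds, golds):
    """Fraction of predictions that exactly match the gold labels."""
    assert preds and len(preds) == len(golds)
    return sum(p == g for p, g in zip(preds, golds)) / len(preds)


def benchmark_acc(per_dataset_accuracy):
    """Unweighted mean over datasets, reported in percent."""
    return 100.0 * sum(per_dataset_accuracy.values()) / len(per_dataset_accuracy)


if __name__ == "__main__":
    # Toy per-dataset scores (fractions), one entry per evaluated dataset.
    scores = {"MIntRec": 0.55, "MELD": 0.48, "MOSI": 0.61}
    print(f"ACC = {benchmark_acc(scores):.2f}")  # -> ACC = 54.67
```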

## 4. Acknowledgements

For more details, please refer to our GitHub repo. If our work is helpful to your research, please consider citing the following paper:

@article{zhang2025mmla,
  author={Zhang, Hanlei and Li, Zhuohang and Zhu, Yeshuang and Xu, Hua and Wang, Peiwu and Zhu, Haige and Zhou, Jie and Zhang, Jinchao},
  title={Can Large Language Models Help Multimodal Language Analysis? MMLA: A Comprehensive Benchmark},
  year={2025},
  journal={arXiv preprint arXiv:2504.16427},
}