---
license: apache-2.0
---

# Visual Anomaly Detection under Complex View-Illumination Interplay: A Large-Scale Benchmark

🌐 Hugging Face Dataset

📚 Paper • 🏠 Homepage

by Yunkang Cao*, Yuqi Cheng*, Xiaohao Xu, Yiheng Zhang, Yihan Sun, Yuxiang Tan, Yuxin Zhang, and Weiming Shen

## 🚀 Updates

We're committed to open science! Here's our progress:

- 2025/05/24: 🧪 Code for benchmark evaluation released: code
- 2025/05/19: 📄 Paper released on arXiv.
- 2025/05/16: 🌐 Dataset homepage launched.

## 📊 Introduction

Visual Anomaly Detection (VAD) systems often fail in the real world due to their sensitivity to viewpoint-illumination interplay: complex interactions between viewing angle and lighting that distort defect visibility. Existing benchmarks overlook this challenge.

Introducing M2AD (Multi-View Multi-Illumination Anomaly Detection), a large-scale benchmark designed to rigorously test VAD robustness under these conditions:

- 119,880 high-resolution images spanning 10 categories, 999 specimens, 12 views, and 10 illuminations (12 × 10 = 120 view-illumination configurations per specimen).
- Two evaluation protocols (sketched conceptually after this list):
  - 🔄 M2AD-Synergy: tests how well methods fuse information across multiple view-illumination configurations.
  - 🧪 M2AD-Invariant: measures single-image robustness to view-illumination variations.
- Key finding: state-of-the-art (SOTA) VAD methods struggle significantly on M2AD, highlighting the critical need for robust solutions.
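
Since M2AD is hosted as a Hugging Face dataset, one way to fetch it locally is with `huggingface_hub`. Below is a minimal sketch; the repo id `ChengYuQi99/M2AD` is an assumption inferred from this page, and no assumption is made about the on-disk file layout:

```python
# Minimal sketch: download a snapshot of the M2AD dataset from the Hugging Face Hub.
# Assumption: the dataset repo id is "ChengYuQi99/M2AD"; adjust if it differs.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="ChengYuQi99/M2AD",  # assumed repo id
    repo_type="dataset",
)
print(f"M2AD downloaded to: {local_dir}")
```

To make the difference between the two protocols concrete, here is a conceptual sketch with hypothetical helpers (`score_image` and `fuse` are placeholders, not part of the benchmark; the official protocol definitions live in the released evaluation code):

```python
# Conceptual sketch of the two M2AD evaluation protocols.
from itertools import product

VIEWS = range(12)          # 12 views per specimen
ILLUMINATIONS = range(10)  # 10 illuminations per view -> 120 configurations

def score_synergy(specimen, score_image, fuse):
    """M2AD-Synergy: fuse anomaly scores across all 120 configurations of one specimen."""
    scores = [score_image(specimen, v, i) for v, i in product(VIEWS, ILLUMINATIONS)]
    return fuse(scores)  # e.g. max- or mean-pooling; the exact fusion rule is an assumption here

def score_invariant(specimen, view, illumination, score_image):
    """M2AD-Invariant: every single image must be scored robustly on its own."""
    return score_image(specimen, view, illumination)
```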