
OmniFall: A Unified Benchmark for Staged-to-Wild Fall Detection

License: CC BY-NC-SA 4.0

This repository contains the annotations and split definitions for OmniFall, a comprehensive benchmark that unifies eight public indoor fall datasets under a consistent ten-class annotation scheme, complemented by the OOPS-Fall benchmark of genuine accidents captured in the wild.

Paper: OmniFall: A Unified Staged-to-Wild Benchmark for Human Fall Detection

Overview

Falls are the leading cause of fatal injuries among older adults worldwide. While the mechanical event of falling lasts only a fraction of a second, the critical health risk often comes from the ensuing "fallen" state—when a person remains on the ground, potentially injured and unable to call for help.

OmniFall addresses three critical limitations in current fall detection research:

  1. Unified Taxonomy: Rather than binary fall/no-fall classification, we provide a consistent ten-class scheme across datasets that distinguishes transient actions (fall, sit down, lie down, stand up) from their static outcomes (fallen, sitting, lying, standing).

  2. Combined Benchmark: We unify eight public datasets (14+ hours of video, 112 subjects, 31 camera views) into a single benchmark with standardized train/val/test splits.

  3. In-the-Wild Evaluation: We include OOPS-Fall, curated from genuine accident videos of the OOPS dataset to test generalization to real-world conditions.

Datasets

This benchmark includes annotations for the following datasets:

  1. CMDFall (7h 25m single view) - 50 subjects, 7 synchronized views
  2. UP Fall (4h 35m) - 17 subjects, 2 synchronized views
  3. Le2i (47m) - 9 subjects, 6 different rooms
  4. GMDCSA24 (21m) - 4 subjects, 3 rooms
  5. CAUCAFall (16m) - 10 subjects, 1 room
  6. EDF (13m) - 5 subjects, 2 synchronized views
  7. OCCU (14m) - 5 subjects, 2 unsynchronized views
  8. MCFD (12m) - 1 subject, 8 views
  9. OOPS-Fall - Curated subset of genuine fall accidents from the OOPS dataset, with strong variation in subjects and views.

Structure

The repository is organized as follows:

  • labels/ - CSV files containing frame-level annotations for each dataset as well as label2id.csv
  • splits/ - Train/validation/test splits for cross-subject (CS) and cross-view (CV) evaluation
    • splits/cs/ - Cross-subject splits, where training, validation, and test sets contain different subjects
    • splits/cv/ - Cross-view splits, where training, validation, and test sets contain different camera views

Label Format

Each label file in the labels/ directory follows this format:

path,label,start,end,subject,cam,dataset
path/to/clip,class_id,start_time,end_time,subject_id,camera_id,dataset_name

Where:

  • path: Path to the video, relative to the respective dataset root.
  • label: Class ID (0-9) corresponding to one of the ten activity classes:
    • 0: walk
    • 1: fall
    • 2: fallen
    • 3: sit_down
    • 4: sitting
    • 5: lie_down
    • 6: lying
    • 7: stand_up
    • 8: standing
    • 9: other
  • start: Start time of the segment (in seconds)
  • end: End time of the segment (in seconds)
  • subject: Subject ID
  • cam: Camera view ID
  • dataset: Name of the dataset

For OOPS-Fall, only fall and non-fall segments are labeled; non-falls are labeled as "other" regardless of the underlying content, as long as it is not a fall. Camera and subject IDs in OOPS-Fall are set to -1.
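
The class IDs can be decoded directly in code. Below is a minimal sketch of the mapping, copied from the list above (the canonical mapping also ships as labels/label2id.csv, against which you should verify):

# Class ID to name mapping, as listed in the Label Format section above.
ID2LABEL = {
    0: "walk",
    1: "fall",
    2: "fallen",
    3: "sit_down",
    4: "sitting",
    5: "lie_down",
    6: "lying",
    7: "stand_up",
    8: "standing",
    9: "other",
}
# Inverse mapping, useful when filtering by class name.
LABEL2ID = {name: idx for idx, name in ID2LABEL.items()}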

Split Format

Split files in the splits/ directory list the video segments included in each partition. You can use the split paths to filter the label data:

path
path/to/clip
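
For example, with plain pandas the label CSVs can be restricted to one partition by filtering on path. A minimal sketch, using illustrative file names (substitute the actual CSVs shipped in labels/ and splits/):

import pandas as pd

# File names below are placeholders; adjust to the actual files in labels/ and splits/.
labels = pd.read_csv("labels/gmdcsa24.csv")                   # segment annotations for one dataset
test_paths = set(pd.read_csv("splits/cs/test.csv")["path"])   # paths in the cross-subject test partition

# Keep only the labelled segments whose clip belongs to the test partition.
test_labels = labels[labels["path"].isin(test_paths)]
print(f"{len(test_labels)} labelled segments in the CS test partition")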

Evaluation Protocols

We provide multiple evaluation configurations via the dataset.yaml file:

Basic Configurations

  • default: Access to all dataset labels (Hugging Face loads everything into the train split by default)
  • cs: Cross-subject splits for all datasets
  • cv: Cross-view splits for all datasets

Individual Dataset Configurations

  • caucafall, cmdfall, edf, gmdcsa24, le2i, mcfd, occu, up_fall, OOPS: Access to individual datasets with their respective cross-subject splits

Multi-Dataset Evaluation Protocols

  • cs-staged: Cross-subject splits combined across all staged datasets
  • cv-staged: Cross-view splits combined across all staged datasets
  • cs-staged-wild: Train and validate on staged datasets with cross-subject splits, test on OOPS-Fall
  • cv-staged-wild: Train and validate on staged datasets with cross-view splits, test on OOPS-Fall
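
Each protocol is exposed as its own configuration and can be loaded directly. A minimal sketch using the configuration names listed above:

from datasets import load_dataset

# Staged-to-wild protocol: train/validation on staged datasets, test on OOPS-Fall.
cs_staged_wild = load_dataset("simplexsigil2/omnifall", "cs-staged-wild")
print(cs_staged_wild)  # shows the available partitions and their sizes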

Examples

from datasets import load_dataset
import pandas as pd

# Load the datasets
print("Loading datasets...")

# Note: We separate segment labels and split definitions, but Hugging Face datasets always expects a split.
# That's why all labels are in the train split when loaded; we create the actual splits afterwards.
labels = load_dataset("simplexsigil2/omnifall", "labels")["train"]

cv_split = load_dataset("simplexsigil2/omnifall", "cv")
cs_split = load_dataset("simplexsigil2/omnifall", "cs")

# There are many more splits, relevant for the paper:
# - cv-staged -> Only lab datasets
# - cs-staged -> Only lab datasets
# - cv-staged-wild -> Lab datasets for train and val, only OOPS-Fall in test set
# - cs-staged-wild -> Lab datasets for train and val, only OOPS-Fall in test set

# Convert to pandas DataFrames
labels_df = pd.DataFrame(labels)
print(f"Labels dataframe shape: {labels_df.shape}")

# Process each split type (CV and CS)
for split_name, split_data in [("CV", cv_split), ("CS", cs_split)]:
    print(f"\n{split_name} Split Processing:")

    # Process each split (train, validation, test)
    for subset_name, subset in split_data.items():
        # Convert to DataFrame
        subset_df = pd.DataFrame(subset)

        # Join with labels on 'path'
        merged_df = pd.merge(subset_df, labels_df, on="path", how="left")

        # Print statistics
        print(f"  {subset_name} split: {len(subset_df)} videos, {merged_df.dropna().shape[0]} labelled segments")

        # Print examples
        if not merged_df.empty:
            print(f"\n  {subset_name.upper()} EXAMPLES:")
            random_samples = merged_df.sample(min(3, len(merged_df)))
            for i, (_, row) in enumerate(random_samples.iterrows()):
                print(f"  Example {i+1}:")
                print(f"    Path: {row['path']}")
                print(f"    Start: {row['start']}")
                print(f"    End: {row['end']}")
                print(f"    Label: {row['label']}")
                print(f"    Subject: {row['subject']}")
                print(f"    Dataset: {row['dataset']}")
                print(f"    Camera: {row['cam']}")
                print()
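
As a follow-up, the numeric labels can be translated into the class names from the Label Format section, e.g. to inspect the class distribution of a partition. A small sketch that reuses the variables from the example above and assumes the test partition is exposed under the key "test":

# Class names indexed by class ID, as defined in the Label Format section.
id2label = ["walk", "fall", "fallen", "sit_down", "sitting",
            "lie_down", "lying", "stand_up", "standing", "other"]

# Join the CS test partition with the labels and drop clips without annotations.
cs_test_df = pd.merge(pd.DataFrame(cs_split["test"]), labels_df, on="path", how="left")
cs_test_df = cs_test_df.dropna(subset=["label"])

# Count segments per class name.
class_counts = cs_test_df["label"].astype(int).map(lambda i: id2label[i]).value_counts()
print(class_counts)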

Citation

If you use OmniFall in your research, please cite our paper (will be updated soon) as well as all sub-dataset papers:

@misc{omnifall,
      title={OmniFall: A Unified Staged-to-Wild Benchmark for Human Fall Detection}, 
      author={David Schneider and Zdravko Marinov and Rafael Baur and Zeyun Zhong and Rodi Düger and Rainer Stiefelhagen},
      year={2025},
      eprint={2505.19889},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2505.19889}, 
}

@inproceedings{omnifall_cmdfall,
  title={A multi-modal multi-view dataset for human fall analysis and preliminary investigation on modality},
  author={Tran, Thanh-Hai and Le, Thi-Lan and Pham, Dinh-Tan and Hoang, Van-Nam and Khong, Van-Minh and Tran, Quoc-Toan and Nguyen, Thai-Son and Pham, Cuong},
  booktitle={2018 24th International Conference on Pattern Recognition (ICPR)},
  pages={1947--1952},
  year={2018},
  organization={IEEE}
}

@article{omnifall_up-fall,
  title={UP-fall detection dataset: A multimodal approach},
  author={Mart{\'\i}nez-Villase{\~n}or, Lourdes and Ponce, Hiram and Brieva, Jorge and Moya-Albor, Ernesto and N{\'u}{\~n}ez-Mart{\'\i}nez, Jos{\'e} and Pe{\~n}afort-Asturiano, Carlos},
  journal={Sensors},
  volume={19},
  number={9},
  pages={1988},
  year={2019},
  publisher={MDPI}
}

@article{omnifall_le2i,
  title={Optimized spatio-temporal descriptors for real-time fall detection: comparison of support vector machine and Adaboost-based classification},
  author={Charfi, Imen and Miteran, Johel and Dubois, Julien and Atri, Mohamed and Tourki, Rached},
  journal={Journal of Electronic Imaging},
  volume={22},
  number={4},
  pages={041106--041106},
  year={2013},
  publisher={Society of Photo-Optical Instrumentation Engineers}
}

@article{omnifall_gmdcsa,
  title={GMDCSA-24: A dataset for human fall detection in videos},
  author={Alam, Ekram and Sufian, Abu and Dutta, Paramartha and Leo, Marco and Hameed, Ibrahim A},
  journal={Data in Brief},
  volume={57},
  pages={110892},
  year={2024},
  publisher={Elsevier}
}

@article{omnifall_cauca,
  title={Dataset CAUCAFall},
  author={Eraso, Jose Camilo and Mu{\~n}oz, Elena and Mu{\~n}oz, Mariela and Pinto, Jesus},
  journal={Mendeley Data},
  volume={4},
  year={2022}
}

@inproceedings{omnifall_edf_occu,
  title={Evaluating depth-based computer vision methods for fall detection under occlusions},
  author={Zhang, Zhong and Conly, Christopher and Athitsos, Vassilis},
  booktitle={International symposium on visual computing},
  pages={196--207},
  year={2014},
  organization={Springer}
}

@article{omnifall_mcfd,
  title={Multiple cameras fall dataset},
  author={Auvinet, Edouard and Rougier, Caroline and Meunier, Jean and St-Arnaud, Alain and Rousseau, Jacqueline},
  journal={DIRO-Universit{\'e} de Montr{\'e}al, Tech. Rep},
  volume={1350},
  pages={24},
  year={2010}
}

@inproceedings{omnifall_oops,
  title={Oops! predicting unintentional action in video},
  author={Epstein, Dave and Chen, Boyuan and Vondrick, Carl},
  booktitle={Proceedings of the IEEE/CVF conference on computer vision and pattern recognition},
  pages={919--929},
  year={2020}
}

License

The annotations and split definitions in this repository are released under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

The original video data belongs to the respective dataset owners and should be obtained from the original sources.

Contact

For questions about the dataset, please contact david.schneider@kit.edu.
