---
license: apache-2.0
task_categories:
  - text-to-audio
language:
  - en
size_categories:
  - 10K<n<100K
configs:
  - config_name: default
    data_files:
      - split: train
        path: train/*.tar
      - split: valid
        path: valid/*.tar
      - split: test
        path: test/*.tar
---

# Clotho-Moment

This repository provides the wav files used for language-based audio moment retrieval (see the citation below).

Each sample consists of a long audio recording containing several audio events, together with temporal and textual annotations.

## Split

- Train
  - `train/train-{000..715}.tar`
  - 37930 audio samples
- Valid
  - `valid/valid-{000..108}.tar`
  - 5741 audio samples
- Test
  - `test/test-{000..142}.tar`
  - 7569 audio samples
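
For reference, the shard URL patterns for each split can be written down directly. A minimal sketch, using the same `resolve/main` base URL as the streaming example below; the `{000..715}`-style brace ranges are expanded by WebDataset itself:

```python
# A minimal sketch: shard URL patterns per split, following the layout listed above.
# WebDataset (via braceexpand) expands the {000..715}-style ranges into individual shard names.
BASE = "https://huggingface.co/datasets/lighthouse-emnlp2024/Clotho-Moment/resolve/main"
SHARD_URLS = {
    "train": f"{BASE}/train/train-{{000..715}}.tar",
    "valid": f"{BASE}/valid/valid-{{000..108}}.tar",
    "test":  f"{BASE}/test/test-{{000..142}}.tar",
}
```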

## Using WebDataset

```python
import torch
import webdataset as wds
from huggingface_hub import get_token
from torch.utils.data import DataLoader

# Stream shards directly from the Hub, authenticating with your Hugging Face token.
hf_token = get_token()
url = "https://huggingface.co/datasets/lighthouse-emnlp2024/Clotho-Moment/resolve/main/train/train-{001..002}.tar"
url = f"pipe:curl -s -L {url} -H 'Authorization:Bearer {hf_token}'"
dataset = wds.WebDataset(url, shardshuffle=None).decode(wds.torch_audio)

for sample in dataset:
    print(sample.keys())
```
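
The loop above only prints each sample's keys. Continuing from that snippet, a minimal sketch of wrapping the stream in the `DataLoader` imported above; it assumes the audio is decoded into a `(waveform, sample_rate)` tuple under a `wav` key and the annotation is stored under a `json` key, so check `sample.keys()` first to confirm the actual names:

```python
# A minimal sketch, continuing from the snippet above.
# Assumes "wav" and "json" keys; verify the real key names with sample.keys().
loader = DataLoader(dataset, batch_size=None, num_workers=2)  # workers read disjoint shards

for sample in loader:
    waveform, sample_rate = sample["wav"]  # torch.Tensor (channels, num_samples), int
    annotation = sample["json"]            # temporal and textual moment annotation
    print(waveform.shape, sample_rate, annotation)
    break
```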

## Citation

```bibtex
@inproceedings{munakata2025language,
  title={Language-based Audio Moment Retrieval},
  author={Munakata, Hokuto and Nishimura, Taichi and Nakada, Shota and Komatsu, Tatsuya},
  booktitle={ICASSP 2025-2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={1--5},
  year={2025},
  organization={IEEE}
}
```