---
configs:
  - config_name: default
    data_files:
      - split: train
        path: activitynet_captions_train.json
      - split: val1
        path: activitynet_captions_val1.json
      - split: val2
        path: activitynet_captions_val2.json
task_categories:
  - text-to-video
  - text-retrieval
  - video-classification
language:
  - en
size_categories:
  - 10K<n<100K
---
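
The YAML above maps each annotation file to a `datasets` split, so the data can be loaded directly with the Hugging Face `datasets` library. A minimal sketch, assuming the Hub id of this repository is `friedrichor/ActivityNet_Captions` (the actual id may differ):

```python
from datasets import load_dataset

# The repo id below is an assumption -- substitute this dataset's actual Hub path.
ds = load_dataset("friedrichor/ActivityNet_Captions")

print(ds)              # DatasetDict with "train", "val1", and "val2" splits
print(ds["train"][0])  # one video-caption record
```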

ActivityNet Captions contains 20K long-form YouTube videos (180 seconds on average) and 100K captions. Most videos contain more than 3 annotated events. Following existing work, we concatenate the multiple short temporal descriptions of each video into a single long sentence and evaluate paragraph-to-video retrieval on this benchmark.
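
The concatenation step itself is simple string joining. A minimal sketch of how paragraph-level captions can be built from the per-event descriptions, assuming the upstream ActivityNet Captions layout where each video id maps to a dict with a `sentences` list (file and field names are assumptions; check the actual JSON schema):

```python
import json

# Build one paragraph-level caption per video from its short event captions.
# The {video_id: {"sentences": [...]}} layout is an assumption based on the
# upstream ActivityNet Captions release.
with open("train.json") as f:
    annotations = json.load(f)

paragraphs = {
    video_id: " ".join(sentence.strip() for sentence in ann["sentences"])
    for video_id, ann in annotations.items()
}
```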

We adopt the official split:

- Train: 10,009 videos, 10,009 captions (concatenated from 37,421 short captions)
- Test (Val1): 4,917 videos, 4,917 captions (concatenated from 17,505 short captions)
- Val2: 4,885 videos, 4,885 captions (concatenated from 17,031 short captions)
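
For a quick sanity check, a loaded copy can be verified against these counts (continuing from the `ds` object in the sketch above):

```python
# Verify that the loaded splits match the official statistics above.
assert len(ds["train"]) == 10009
assert len(ds["val1"]) == 4917
assert len(ds["val2"]) == 4885
```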

ActivityNet Official Release: [ActivityNet Download](http://activity-net.org/download.html)


## 🌟 Citation

```bibtex
@inproceedings{caba2015activitynet,
  title={{ActivityNet}: A large-scale video benchmark for human activity understanding},
  author={Caba Heilbron, Fabian and Escorcia, Victor and Ghanem, Bernard and Carlos Niebles, Juan},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2015}
}
```