---
license: cc-by-3.0
dataset_info:
  features:
    - name: audio
      dtype: audio
    - name: relative_path
      dtype: string
  splits:
    - name: test
      num_bytes: 13646545685.208
      num_examples: 15292
    - name: validation
      num_bytes: 22378049262.984
      num_examples: 25468
    - name: train
      num_bytes: 269257423227.302
      num_examples: 150787
  download_size: 295850537553
  dataset_size: 305282018175.494
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
      - split: validation
        path: data/validation-*
---

# MelodySim: Measuring Melody-aware Music Similarity for Plagiarism Detection

GitHub | Model | Paper

The MelodySim dataset contains 1,710 valid synthesized pieces originating from the Slakh2100 dataset, each provided in 4 different versions (created with different augmentation settings), for a total duration of 419 hours.

This dataset may help research in:

- Music similarity learning
- Music plagiarism detection

## Dataset Details

The MelodySim dataset contains three splits: train, validation and test. Each split contains multiple tracks, and each track folder contains the same song in 4 versions ("original", "version_0", "version_1", "version_2"), all synthesized from the same MIDI file with an sf2 soundfont under different settings. See the MelodySim paper for details on how the different versions are augmented. Each version is split into multiple 10-second chunks named by their indices.
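
As an illustration of how these features can be accessed, the sketch below streams a few examples with the Hugging Face `datasets` library and prints the `audio` and `relative_path` fields declared in the metadata above. The repository id is a placeholder for this dataset's id, and audio decoding requires an audio backend such as `soundfile`.

```python
# Minimal sketch: stream a few examples and inspect the `audio` and
# `relative_path` features declared in the dataset metadata.
# NOTE: "<namespace>/melodySim" is a placeholder -- substitute this
# repository's actual id.
from datasets import load_dataset

ds = load_dataset("<namespace>/melodySim", split="validation", streaming=True)

for example in ds.take(3):
    audio = example["audio"]  # dict with "array" and "sampling_rate"
    seconds = len(audio["array"]) / audio["sampling_rate"]
    print(example["relative_path"], f"{seconds:.1f}s @ {audio['sampling_rate']} Hz")
```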

After downloading the dataset, this dataloader may help with loading it.
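
For similarity learning or plagiarism detection, it is often useful to group the same 10-second chunk across the different versions of a track. The sketch below does this from the `relative_path` field; the assumed path layout (`<track>/<version>/<chunk>.wav`) and the example paths are hypothetical and should be adjusted to the actual folder structure.

```python
# Sketch: bucket chunk paths so that the same chunk from different versions of
# one track ends up together (positive pairs for similarity learning).
# Assumes relative_path looks like "<track>/<version>/<chunk_index>.wav";
# adjust the parsing if the actual layout differs.
from collections import defaultdict
from pathlib import PurePosixPath

def group_versions_by_chunk(relative_paths):
    buckets = defaultdict(list)
    for rel in relative_paths:
        parts = PurePosixPath(rel).parts
        if len(parts) < 3:
            continue  # path does not match the assumed layout
        track, version, chunk = parts[0], parts[1], parts[-1]
        buckets[(track, chunk)].append((version, rel))
    # Keep only chunks available in more than one version.
    return {key: entries for key, entries in buckets.items() if len(entries) > 1}

# Hypothetical usage:
pairs = group_versions_by_chunk([
    "Track00001/original/000.wav",
    "Track00001/version_0/000.wav",
    "Track00001/version_1/000.wav",
])
print(pairs[("Track00001", "000.wav")])
```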

## Citation

If you find this work useful in your research, please cite:

@article{lu2025melodysim,
  title={MelodySim: Measuring Melody-aware Music Similarity for Plagiarism Detection},
  author={Tongyu Lu and Charlotta-Marlena Geist and Jan Melechovsky and Abhinaba Roy and Dorien Herremans},
  year={2025},
  journal={arXiv:2505.20979}
}