---
annotations_creators: []
license: cc-by-nc-4.0
pretty_name: GigaMIDI
size_categories:
  - 1M<n<10M
source_datasets:
  - original
tags: []
extra_gated_prompt: >-
  You agree to use the GigaMIDI dataset only for non-commercial research or
  education without infringing copyright laws or causing harm to the creative
  rights of artists, creators, or musicians.
extra_gated_fields:
  Institution (or independent): text
  Country: country
  Legal Full name: text
  Institutional email (or independent): text
  Specific date: date_picker
  I want to use this dataset for:
    type: select
    options:
      - Research
      - Education
      - Arts/Music
      - label: Other
        value: other
  The GigaMIDI dataset has been collected, utilized, and distributed under the Fair Dealing provisions for research and private study outlined in the Canadian Copyright Act: checkbox
  I agree to use this dataset for research use under fair dealing ONLY: checkbox
  If you use GigaMIDI dataset in your research, please acknowledge by citing our reference paper (see the Reference/Citation section below) to support knowledge sharing and advance the field: checkbox
task_ids: []
configs:
  - config_name: v1.0.0
    default: true
    data_files:
      - split: train
        path: v1.0.0/train.parquet
      - split: validation
        path: v1.0.0/validation.parquet
      - split: test
        path: v1.0.0/test.parquet
  - config_name: v1.1.0
    data_files:
      - split: train
        path: v1.1.0/train/*.parquet
      - split: validation
        path: v1.1.0/validation/*.parquet
      - split: test
        path: v1.1.0/test/*.parquet
---

# Dataset Card for GigaMIDI


## The Extended GigaMIDI Dataset Summary

We present the [extended GigaMIDI dataset](https://huggingface.co/datasets/Metacreation/GigaMIDI/viewer/v1.1.0), a large-scale symbolic music collection comprising over 2.1 million unique MIDI files with detailed annotations for music loop detection. Expanding on its predecessor, this release introduces a novel expressive loop detection method that captures performance nuances such as microtiming and dynamic variation, essential for advanced generative music modelling. Our method extends previous approaches, which were limited to strictly quantized, non-expressive tracks, by employing the Note Onset Median Metric Level (NOMML) heuristic to distinguish expressive from non-expressive material. This enables robust loop detection across a broader spectrum of MIDI data. Our loop detection method reveals more than 9.2 million non-expressive loops spanning all General MIDI instruments, alongside 2.3 million expressive loops identified through our new method. As the largest resource of its kind, the extended GigaMIDI dataset provides a strong foundation for developing models that synthesize structurally coherent and expressively rich musical loops. As a use case, we leverage this dataset to train an expressive multitrack symbolic music loop generation model using the MIDI-GPT system, resulting in the creation of a synthetic loop dataset.

## Dataset files

All MIDI files and metadata for the extended GigaMIDI dataset are distributed in Parquet format on the Hugging Face Hub for seamless loading with the 🤗 Datasets library. For convenience, the original metadata CSV and the full MIDI archive can also be downloaded directly from the dataset repository.

## Dataset Description

### Dataset Curators

Main curator: Keon Ju Maverick Lee

Assistance: Jeff Ens, Sara Adkins, Nathan Fradet, Pedro Sarmento, Philippe Pasquier, Mathieu Barthet, Phillip Long, Paul Triana

Note: The GigaMIDI dataset is designed for continuous growth, with new subsets added and updated over time to ensure its ongoing expansion and relevance.

### Licensing Information

The dataset is distributed under the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license. This license permits users to share, adapt, and utilize the dataset exclusively for non-commercial purposes, including research and educational applications, provided that proper attribution is given to the original creators. By adhering to the terms of CC BY-NC 4.0, users ensure the dataset's responsible use while fostering its accessibility for academic and non-commercial endeavors.

### Citation/Reference

Currently, the extended GigaMIDI dataset is under review at the NeurIPS 2025 Dataset Track for Creative AI.

If you use the GigaMIDI dataset or any part of this project, please cite the following paper: https://transactions.ismir.net/articles/10.5334/tismir.203

```bibtex
@article{lee2025gigamidi,
  title={The GigaMIDI Dataset with Features for Expressive Music Performance Detection},
  author={Lee, Keon Ju Maverick and Ens, Jeff and Adkins, Sara and Sarmento, Pedro and Barthet, Mathieu and Pasquier, Philippe},
  journal={Transactions of the International Society for Music Information Retrieval (TISMIR)},
  volume={8},
  number={1},
  pages={1--19},
  year={2025}
}
```

## 📂 Dataset Summary: Extended GigaMIDI Dataset

The Extended GigaMIDI Dataset is a large-scale symbolic music collection containing over 2.1 million unique MIDI files, with detailed annotations for music loop detection and expressive performance characteristics. It is the largest symbolic music dataset to date and provides a rich foundation for AI music generation, expressive performance modeling, and loop-based symbolic music tasks.

### 📌 Key Features

- 2.1M+ unique MIDI files covering the full General MIDI (GM) instrument set.
- 9.2M non-expressive loops and 2.3M expressive loops detected using custom methods.
- Includes expressive-level annotations using the NOMML heuristic.
- Loops range from 4 to 32 beats, suitable for training loop-based generative models.
- Multitrack expressive loop generation supported with fine-grained expressive control (velocity, microtiming).
- Basis for the GigaMIDI-Loop synthetic dataset generated via a fine-tuned MIDI-GPT model.

### 🧠 Use Case

We demonstrate the dataset's utility by training a Transformer-based model (MIDI-GPT) for expressive symbolic music loop generation. The resulting synthetic dataset, GigaMIDI-Loop, consists of 1.3M loops generated with control over expressiveness (NOMML level) and instrument group, and is available separately.

### 🛠️ Technical Details

- Expressive performance is annotated using the Note Onset Median Metric Level (NOMML).
- Loop detection methods include:
  - Correlation-based matching for non-expressive loops.
  - Soft-count similarity (Jaccard pitch overlap, velocity similarity, microtiming) for expressive loops.
- Tracks are classified as expressive or non-expressive to support targeted loop detection.
- Dataset split: Train (80%), Validation (10%), Test (10%).

๐Ÿ” Dataset Stats

  • Files: 2,136,218
  • Tracks: 6,891,738
  • Beats: 153,947,183
  • Instrument Coverage: All 128 GM melodic + 47 percussion programs (175 total)
  • Expressiveness: ~28.6% expressive, ~71.4% non-expressive tracks
  • Top instruments in expressive loops: Piano, Drums, Guitar

## Dataset Structure

### Data Instances

A typical data sample comprises the data split in `split`, the MD5 hash of the MIDI file in `md5`, and the raw MIDI bytes in `music`, which can be loaded with an external package such as Symusic:

```python
{
    'split': 'train',
    'md5': '0211bbf6adf0cf10d42117e5929929a4',
    'music': b"MThd\x00\x00\x00\x06\x00\x01\x00\x05\x01\x00MTrk\x00...",  # shortened
    'NOMML': [0, 12, 0, 0],
    'num_tracks': 4,
    'TPQN': 480,
    'total_notes': 1032,
    'avg_note_duration': 0.46,
    'avg_velocity': 63.2,
    'min_velocity': 30,
    'max_velocity': 98,
    'tempo': '120.0',
    'loop_track_idx': [0, 2],
    'loop_instrument_type': ['piano', 'drums'],
    'loop_start': [0, 1920],
    'loop_end': [1920, 3840],
    'loop_duration_beats': [4.0, 4.0],
    'loop_note_density': [2.5, 3.1],
    'Type': '0',
    'instrument_category__drums-only__0__all-instruments-with-drums__1_no-drums__2': 2,
    'music_styles_curated': ['classical'],
    'music_style_scraped': 'classical',
    'music_style_audio_text_Discogs': ['classical---baroque'],
    'music_style_audio_text_Lastfm': [],
    'music_style_audio_text_Tagtraum': [],
    'title': 'Contrapunctus 1 from Art of Fugue',
    'artist': 'Bach, Johann Sebastian',
    'audio_text_matches_score': 0.83,
    'audio_text_matches_sid': ['065TU5v0uWSQmnTlP5Cnsz'],
    'audio_text_matches_mbid': ['43d521a9-54b0-416a-b15e-08ad54982e63'],
    'MIDI_program_number__expressive_': [0],
    'instrument_group__expressive_': ['piano'],
    'start_tick__expressive_': [0],
    'end_tick__expressive_': [1920],
    'duration_beats__expressive_': [4.0],
    'note_density__expressive_': [2.5],
    'loopability__expressive_': [0.8],
}
```
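As a quick sanity check on the raw `music` bytes, the 14-byte Standard MIDI File header can be decoded directly with the standard library (a minimal sketch of ours, with fields as defined by the SMF specification; for real work, load the bytes with Symusic as in the example scripts below):

```python
import struct

def midi_header(data: bytes):
    """Decode the Standard MIDI File header chunk: (format, n_tracks, division).

    ``division`` is ticks per quarter note when its top bit is 0 (SMPTE otherwise).
    """
    if data[:4] != b"MThd":
        raise ValueError("not a Standard MIDI File")
    fmt, n_tracks, division = struct.unpack(">HHH", data[8:14])
    return fmt, n_tracks, division

# A hand-built 14-byte header: format 1, 5 track chunks, 256 ticks per quarter note.
header = midi_header(b"MThd\x00\x00\x00\x06\x00\x01\x00\x05\x01\x00")  # (1, 5, 256)
```

This is only a header-level check; values such as `num_tracks` and `TPQN` in the metadata are computed from the full file.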

### Data Fields

The GigaMIDI metadata schema defines the following fields for each entry:

- `split` (string): The data split (train, validation, or test).
- `md5` (string): MD5 hash of the MIDI file, corresponding to its file name.
- `music` (bytes): Raw MIDI bytes to be loaded with an external library (e.g., Symusic).
- `NOMML` (List[int]): Note Onset Median Metric Level for each track.
- `num_tracks` (int): Number of tracks in the MIDI file.
- `TPQN` (int): Ticks per quarter note.
- `total_notes` (int): Total number of note events.
- `avg_note_duration` (float): Average note duration (in ticks).
- `avg_velocity` (float): Average MIDI velocity.
- `min_velocity` (int): Minimum velocity value.
- `max_velocity` (int): Maximum velocity value.
- `tempo` (string): Tempo metadata from the MIDI file.
- `loop_track_idx` (List[int]): Indices of tracks where loops were detected.
- `loop_instrument_type` (List[string]): Instrument types for each detected loop.
- `loop_start` (List[int]): Start tick of each loop.
- `loop_end` (List[int]): End tick of each loop.
- `loop_duration_beats` (List[float]): Duration of each loop in beats.
- `loop_note_density` (List[float]): Note density (notes per beat) within each loop.
- `Type` (string): Type indicator for the MIDI file.
- `instrument_category__drums-only__0__all-instruments-with-drums__1_no-drums__2` (int): Instrument-category code (0 = drums-only, 1 = all-instruments-with-drums, 2 = no-drums).
- `music_styles_curated` (List[string]): Curated music style labels.
- `music_style_scraped` (string): Music style scraped from external sources.
- `music_style_audio_text_Discogs` (List[string]): Styles from Discogs audio-text matching.
- `music_style_audio_text_Lastfm` (List[string]): Styles from Last.fm matching.
- `music_style_audio_text_Tagtraum` (List[string]): Styles from Tagtraum matching.
- `title` (string): Track title.
- `artist` (string): Artist name.
- `audio_text_matches_score` (float): Audio-text matching score.
- `audio_text_matches_sid` (List[string]): Matched Spotify IDs.
- `audio_text_matches_mbid` (List[string]): Matched MusicBrainz IDs.
- `MIDI_program_number__expressive_` (List[int]): Program numbers for expressive loops.
- `instrument_group__expressive_` (List[string]): Instrument groups for expressive loops.
- `start_tick__expressive_` (List[int]): Start ticks for expressive loops.
- `end_tick__expressive_` (List[int]): End ticks for expressive loops.
- `duration_beats__expressive_` (List[float]): Durations (in beats) of expressive loops.
- `note_density__expressive_` (List[float]): Note density of expressive loops.
- `loopability__expressive_` (List[float]): Loopability scores for expressive loops.
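The loop fields are parallel lists indexed per detected loop. As an illustration of how they line up, the following sketch of ours selects the note onsets falling inside each loop; the `(onset_tick, pitch)` pairs are a stand-in for whatever note representation your MIDI parser produces:

```python
def extract_loops(notes_by_track, loop_track_idx, loop_start, loop_end):
    """Return, for each detected loop, the notes of its track whose onset
    tick falls inside [loop_start, loop_end).

    notes_by_track: one list of (onset_tick, pitch) pairs per track.
    The three loop_* arguments are the parallel lists from a dataset sample.
    """
    loops = []
    for track, start, end in zip(loop_track_idx, loop_start, loop_end):
        loops.append([n for n in notes_by_track[track] if start <= n[0] < end])
    return loops
```

With the sample above, `zip([0, 2], [0, 1920], [1920, 3840])` pairs the piano loop with ticks 0–1920 of track 0 and the drum loop with ticks 1920–3840 of track 2.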

## How to use

The 🤗 Datasets library allows you to load and pre-process the dataset in pure Python at scale. The dataset can be downloaded and prepared on your local drive in one call with the `load_dataset` function.

```python
from datasets import load_dataset

# Pass a config name, e.g. load_dataset("Metacreation/GigaMIDI", "v1.1.0"),
# to select a specific dataset version; v1.0.0 is the default config.
dataset = load_dataset("Metacreation/GigaMIDI")
```

Using the 🤗 Datasets library, you can also stream the dataset on the fly by adding a `streaming=True` argument to the `load_dataset` function call. Loading a dataset in streaming mode loads individual samples one at a time rather than downloading the entire dataset to disk.

```python
from datasets import load_dataset

dataset = load_dataset("Metacreation/GigaMIDI", split="train", streaming=True)

data_sample = next(iter(dataset))
```

Bonus: create a PyTorch `DataLoader` directly from the dataset (local or streamed).

### Local

```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from torch.utils.data.sampler import BatchSampler, RandomSampler

dataset = load_dataset("Metacreation/GigaMIDI", split="train")
batch_sampler = BatchSampler(RandomSampler(dataset), batch_size=32, drop_last=False)
dataloader = DataLoader(dataset, batch_sampler=batch_sampler)
```

### Streaming

```python
from datasets import load_dataset
from torch.utils.data import DataLoader

dataset = load_dataset("Metacreation/GigaMIDI", split="train", streaming=True)
dataloader = DataLoader(dataset, batch_size=32)
```

### Example scripts

MIDI files can be easily loaded with Symusic and tokenized with MidiTok:

```python
from datasets import load_dataset
from miditok import REMI
from symusic import Score

dataset = load_dataset("Metacreation/GigaMIDI", split="train")
tokenizer = REMI()
for sample in dataset:
    score = Score.from_midi(sample["music"])
    tokens = tokenizer(score)
```

The dataset can be processed with the `dataset.map` and `dataset.filter` methods:

```python
from __future__ import annotations

from pathlib import Path

from datasets import load_dataset
from miditok.constants import SCORE_LOADING_EXCEPTION
from miditok.utils import get_bars_ticks
from symusic import Score


def is_score_valid(
    score: Score | Path | bytes, min_num_bars: int, min_num_notes: int
) -> bool:
    """
    Check if a ``symusic.Score`` is valid, i.e. contains the minimum required numbers of bars and notes.

    :param score: ``symusic.Score`` to inspect, path to a MIDI file, or raw MIDI bytes.
    :param min_num_bars: minimum number of bars the score should contain.
    :param min_num_notes: minimum number of notes the score should contain.
    :return: boolean indicating if ``score`` is valid.
    """
    if isinstance(score, Path):
        try:
            score = Score(score)
        except SCORE_LOADING_EXCEPTION:
            return False
    elif isinstance(score, bytes):
        try:
            score = Score.from_midi(score)
        except SCORE_LOADING_EXCEPTION:
            return False

    return (
        len(get_bars_ticks(score)) >= min_num_bars and score.note_num() >= min_num_notes
    )


dataset = load_dataset("Metacreation/GigaMIDI", split="train")
dataset = dataset.filter(
    lambda ex: is_score_valid(ex["music"], min_num_bars=8, min_num_notes=50)
)
```

### Export MIDI files

The GigaMIDI dataset is provided in Parquet format for ease of use with the Hugging Face `datasets` library. If you wish to work with the raw MIDI files, simply iterate over the dataset as shown in the examples above and write the `music` entry of each sample to your local filesystem as a MIDI file.
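As a minimal sketch (the helper name and the `<md5>.mid` output layout are our own choices, not part of the dataset tooling), the export can be written as:

```python
from pathlib import Path

def export_midis(dataset, out_dir):
    """Write each sample's raw MIDI bytes to <out_dir>/<md5>.mid.

    ``dataset`` can be any iterable of samples with "md5" and "music" keys,
    so this works with both map-style and streaming datasets.
    """
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for sample in dataset:
        (out / f"{sample['md5']}.mid").write_bytes(sample["music"])
```

Because the MD5 hash is also the file name convention used by GigaMIDI, the exported files match the original naming scheme.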

## Dataset Creation

### Curation Rationale

The GigaMIDI dataset was curated through a meticulous process to ensure a high-quality collection of MIDI files for research, particularly in expressive music performance detection. Freely available MIDI files were aggregated from platforms like Zenodo and GitHub through web scraping, with all subsets documented and deduplicated using MD5 checksums. The dataset was standardized to adhere to the General MIDI specification, including remapping non-GM drum tracks and correcting MIDI channel assignments. Manual curation was performed to define ground-truth categories for expressive and non-expressive performances, enabling robust analysis.
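The MD5-based deduplication step can be sketched as follows (a minimal illustration of the idea; `dedupe_midis` is a hypothetical helper, not the actual curation script):

```python
import hashlib

def dedupe_midis(files):
    """Keep the first file seen for each MD5 digest of the raw MIDI bytes.

    ``files`` is an iterable of (name, data) pairs; the digest doubles as the
    anonymized file name, as in GigaMIDI.
    """
    seen = set()
    unique = []
    for name, data in files:
        digest = hashlib.md5(data).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append((digest, data))
    return unique
```

Hashing the raw bytes means byte-identical files are collapsed regardless of their original file names, though renderings of the same piece with different byte content are kept.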

### Source Data

#### Data Source Links

The GigaMIDI dataset incorporates MIDI files aggregated from various publicly available sources. Detailed information and source links for each subset are provided in the accompanying PDF file:

Data Source Links for the GigaMIDI Dataset

This document includes source links for all publicly available MIDI files included in the dataset.

Please refer to the PDF for comprehensive details about the origins and organization of the dataset's contents.

### Annotations

#### Annotation process

To classify tracks based on dynamic and timing variations, novel heuristics were developed, such as the Distinctive Note Velocity Ratio (DNVR), Distinctive Note Onset Deviation Ratio (DNODR), and Note Onset Median Metric Level (NOMML). Musical styles were annotated using the Musicmap style topology, with manual validation to ensure accuracy. The dataset, hosted on the Hugging Face Hub for enhanced accessibility, supports integration with tools like Symusic and MidiTok. With over 1.4 million unique MIDI files and 7.1 million loops in its initial release, GigaMIDI offers an extensive resource for Music Information Retrieval (MIR) and computational musicology research.

More details are available via our GitHub webpage: https://github.com/Metacreation-Lab/GigaMIDI-Dataset

## Limitations

In navigating the use of MIDI datasets for research and creative exploration, it is imperative to consider the ethical implications inherent in dataset bias. Bias in MIDI datasets often mirrors prevailing practices in Western digital music production, where certain instruments, particularly the piano and drums, dominate. This predominance is largely driven by the widespread availability and use of MIDI-compatible instruments and controllers for them: the piano is both a primary compositional tool and a ubiquitous MIDI controller and keyboard, facilitating input for a wide range of virtual instruments and synthesizers, while drums, whether through drum machines or MIDI drum pads, enjoy widespread use for rhythm programming and beat production thanks to their intuitive interfaces and versatility within digital audio workstations. This may explain why the distribution of MIDI instruments in MIDI datasets is often skewed toward piano and drums, with limited representation of other instruments, particularly those requiring more nuanced interpretation or less commonly played via MIDI controllers.

Moreover, the MIDI standard, while effective for encoding basic musical information, is limited in representing the complexities of Western music's time signatures and meters. It lacks an inherent framework to encode hierarchical metric structures, such as strong and weak beats, and struggles with the dynamic flexibility of metric changes. Additionally, its reliance on fixed temporal grids often oversimplifies expressive rhythmic nuances like rubato, leading to a loss of critical musical details. These constraints necessitate supplementary metadata or advanced techniques to accurately capture the temporal intricacies of Western music.

Furthermore, a constraint emerges from the inadequate accessibility of ground truth data that clearly demarcates the differentiation between non-expressive and expressive MIDI tracks across all MIDI instruments for expressive performance detection. Presently, such data predominantly originates from piano and drum instruments in the GigaMIDI dataset.

## Data Accessibility and Ethical Statements

The GigaMIDI dataset consists of MIDI files acquired via the aggregation of previously available datasets and web scraping from publicly available online sources. Each subset is accompanied by source links, copyright information when available, and acknowledgments. File names are anonymized by replacing them with the MD5 hash of the file contents. We acknowledge the work from the previous dataset papers (Goebl, 1999; Müller et al., 2011; Raffel, 2016; Bosch et al., 2016; Miron et al., 2016; Donahue et al., 2018; Crestel et al., 2018; Li et al., 2018; Hawthorne et al., 2019; Gillick et al., 2019; Wang et al., 2020; Foscarin et al., 2020; Callender et al., 2020; Ens and Pasquier, 2021; Hung et al., 2021; Sarmento et al., 2021; Zhang et al., 2022; Szelogowski et al., 2022; Liu et al., 2022; Ma et al., 2022; Kong et al., 2022; Hyun et al., 2022; Choi et al., 2022; Plut et al., 2022; Hu and Widmer, 2023) that we aggregate and analyze as part of the GigaMIDI subsets.

This dataset has been collected, utilized, and distributed under the Fair Dealing provisions for research and private study outlined in the Canadian Copyright Act (Government of Canada, 2024). Fair Dealing permits the limited use of copyright-protected material without the risk of infringement and without having to seek the permission of copyright owners. It is intended to provide a balance between the rights of creators and the rights of users. As per instructions of the Copyright Office of Simon Fraser University, two protective measures have been put in place that are deemed sufficient given the nature of the data (accessible online):

  1. We explicitly state that this dataset has been collected, used, and distributed under the Fair Dealing provisions for research and private study outlined in the Canadian Copyright Act.
  2. On the Hugging Face Hub, we advertise that the data is available for research purposes only and collect the user's legal name and email as proof of agreement before granting access.

We thus decline any responsibility for misuse.

The FAIR (Findable, Accessible, Interoperable, Reusable) principles (Jacobsen et al., 2020) serve as a framework to ensure that data is well-managed, easily discoverable and usable for a broad range of purposes in research. These principles are particularly important in the context of data management to facilitate open science, collaboration, and reproducibility.

- **Findable**: Data should be easily discoverable by both humans and machines. This is typically achieved through proper metadata, traceable source links and searchable resources. Applying this to MIDI data, each subset of MIDI files collected from public domain sources is accompanied by clear and consistent metadata via our GitHub and Hugging Face Hub webpages. For example, organizing the source links of each data subset, as done with the GigaMIDI dataset, ensures that each source can be easily traced and referenced, improving discoverability.

- **Accessible**: Once found, data should be easily retrievable using standard protocols. Accessibility does not necessarily imply open access, but it does mean that data should be available under well-defined conditions. For the GigaMIDI dataset, hosting the data on platforms like the Hugging Face Hub improves accessibility, as these platforms provide efficient data retrieval mechanisms, especially for large-scale datasets. Ensuring that MIDI data is accessible for public use while respecting any applicable licenses supports wider research and analysis in music computing.

- **Interoperable**: Data should be structured in such a way that it can be integrated with other datasets and used by various applications. MIDI data, being a widely accepted format in music research, is inherently interoperable, especially when standardized metadata and file formats are used. By ensuring that the GigaMIDI dataset complies with widely adopted standards and supports integration with state-of-the-art libraries in symbolic music processing, such as Symusic and MidiTok, the dataset enhances its utility for music researchers and practitioners working across different platforms and systems.

- **Reusable**: Data should be well-documented and licensed to be reused in future research. Reusability is ensured through proper metadata, clear licenses, and documentation of provenance. In the case of GigaMIDI, aggregating all subsets from public domain sources and linking them to the original sources strengthens the reproducibility and traceability of the data. This practice allows future researchers to not only use the dataset but also verify and expand upon it by referring to the original data sources.

Developing ethical and responsible AI systems for music requires adherence to core principles of fairness, transparency, and accountability. The creation of the GigaMIDI dataset reflects a commitment to these values, emphasizing the promotion of ethical practices in data usage and accessibility. Our work aligns with prominent initiatives promoting ethical approaches to AI in music, such as AI for Music Initiatives, which advocates for principles guiding the ethical creation of music with AI, supported by the Metacreation Lab for Creative AI and the Centre for Digital Music, which provide critical guidelines for the responsible development and deployment of AI systems in music. Similarly, the Fairly Trained initiative highlights the importance of ethical standards in data curation and model training, principles that are integral to the design of the GigaMIDI dataset. These frameworks have shaped the methodologies used in this study, from dataset creation and validation to algorithmic design and system evaluation. By engaging with these initiatives, this research not only contributes to advancing AI in music but also reinforces the ethical use of data for the benefit of the broader music computing and MIR communities.

## Acknowledgements

We gratefully acknowledge the support and contributions that have directly or indirectly aided this research. This work was supported in part by funding from the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Social Sciences and Humanities Research Council of Canada (SSHRC). We also extend our gratitude to the School of Interactive Arts and Technology (SIAT) at Simon Fraser University (SFU) for providing resources and an enriching research environment. Additionally, we thank the Centre for Digital Music (C4DM) at Queen Mary University of London (QMUL) for fostering collaborative opportunities and supporting our engagement with interdisciplinary research initiatives. We also acknowledge the support of EPSRC UKRI Centre for Doctoral Training in AI and Music (Grant EP/S022694/1) and UKRI - Innovate UK (Project number 10102804).

Special thanks are extended to Dr. Cale Plut for his meticulous manual curation of musical styles and to Dr. Nathan Fradet for his invaluable assistance in developing the Hugging Face Hub page for the GigaMIDI dataset, ensuring it is accessible and user-friendly for music computing and MIR researchers. We also sincerely thank our research interns, Paul Triana and Davide Rizzotti, for their thorough proofreading of the manuscript, as well as the TISMIR reviewers who helped us improve our manuscript.

Finally, we express our heartfelt appreciation to the individuals and communities who generously shared their MIDI files for research purposes. Their contributions have been instrumental in advancing this work and fostering collaborative knowledge in the field.