---
license: cc-by-nc-nd-4.0
viewer: true
dataset_info:
- config_name: VGMIDI
features:
- name: prompt
dtype: string
- name: data
dtype: string
- name: label
dtype:
class_label:
names:
'0': Q1
'1': Q2
'2': Q3
'3': Q4
splits:
- name: train
num_bytes: 6029629
num_examples: 8383
- name: test
num_bytes: 673336
num_examples: 932
download_size: 7109915
dataset_size: 6702965
- config_name: EMOPIA
features:
- name: prompt
dtype: string
- name: data
dtype: string
- name: label
dtype:
class_label:
names:
'0': Q1
'1': Q2
'2': Q3
'3': Q4
splits:
- name: train
num_bytes: 18731226
num_examples: 19332
- name: test
num_bytes: 2102303
num_examples: 2148
download_size: 21846539
dataset_size: 20833529
- config_name: Rough4Q
features:
- name: prompt
dtype: string
- name: data
dtype: string
- name: label
dtype:
class_label:
names:
'0': Q1
'1': Q2
'2': Q3
'3': Q4
splits:
- name: train
num_bytes: 133211901
num_examples: 468605
- name: test
num_bytes: 14831382
num_examples: 52068
download_size: 172425554
dataset_size: 148043283
- config_name: Analysis
features:
- name: label
dtype:
class_label:
names:
'0': Q1
'1': Q2
'2': Q3
'3': Q4
- name: valence
dtype:
class_label:
names:
'0': low
'1': high
- name: arousal
dtype:
class_label:
names:
'0': low
'1': high
- name: key
dtype:
class_label:
names:
'0': "C"
'1': "C#"
'2': "D"
'3': "Eb"
'4': "E"
'5': "F"
'6': "F#"
'7': "G"
'8': "G#/Ab"
'9': "A"
'10': "Bb"
'11': "B"
- name: mode
dtype:
class_label:
names:
'0': minor
'1': major
- name: pitch
dtype: float32
- name: range
dtype: float32
- name: pitchSD
dtype: float32
- name: direction
dtype: int8
- name: tempo
dtype: float32
- name: volume
dtype: float32
splits:
- name: train
num_bytes: 77958
num_examples: 1278
download_size: 333534
dataset_size: 77958
configs:
- config_name: VGMIDI
data_files:
- split: train
path: VGMIDI/train/data-*.arrow
- split: test
path: VGMIDI/test/data-*.arrow
- config_name: EMOPIA
data_files:
- split: train
path: EMOPIA/train/data-*.arrow
- split: test
path: EMOPIA/test/data-*.arrow
- config_name: Rough4Q
data_files:
- split: train
path: Rough4Q/train/data-*.arrow
- split: test
path: Rough4Q/test/data-*.arrow
- config_name: Analysis
data_files:
- split: train
path: Analysis/train/data-*.arrow
---
# EMelodyGen
The EMelodyGen dataset comprises four subsets: Analysis, EMOPIA, VGMIDI, and Rough4Q. The EMOPIA and VGMIDI subsets are derived from the MIDI files of their respective source datasets, where the melodies in the first voice (V1) of each soundtrack have been converted to ABC notation through a data processing script; these subsets are enriched with enhanced emotional labels. The Analysis subset contains a statistical analysis of the original EMOPIA and VGMIDI datasets, aimed at guiding the enhancement and automatic annotation of musical emotion data. Finally, the Rough4Q subset is created by merging ABC notation collections from the IrishMAN-XML, EsAC (Essen), Wikifonia, Nottingham, JSBach Chorales, and CCMusic datasets; these collections are processed and augmented based on insights from the Analysis subset and then roughly labeled with emotions using the music21 library.
## Maintenance
```bash
GIT_LFS_SKIP_SMUDGE=1 git clone git@hf.co:datasets/monetjoe/EMelodyGen
cd EMelodyGen
```
## Usage
```python
from datasets import load_dataset

# VGMIDI (default) / EMOPIA / Rough4Q subset
ds = load_dataset("monetjoe/EMelodyGen", name="VGMIDI")
for item in ds["train"]:
    print(item)

for item in ds["test"]:
    print(item)

# Analysis subset
ds = load_dataset("monetjoe/EMelodyGen", name="Analysis", split="train")
for item in ds:
    print(item)
```
## Analysis
### Statistical values
| Feature | Min | Max | Range | Median | Mean |
| :-----: | :---: | :----: | :----: | :----: | :----: |
| tempo | 47.85 | 184.57 | 136.72 | 117.45 | 119.38 |
| pitch | 36.0 | 89.22 | 53.22 | 60.98 | 61.38 |
| range | 2.0 | 91.0 | 89.0 | 47.0 | 47.47 |
| pitchSD | 0.64 | 24.82 | 24.18 | 12.91 | 13.09 |
| volume | 0.02 | 0.17 | 0.16 | 0.09 | 0.09 |
### Pearson correlation table
| Emotion-feature pair | r       | Correlation   | p-value   | Significance          |
| :------------------- | :------ | :------------ | :-------- | :-------------------- |
| valence - tempo | +0.0621 | weak positive | 2.645e-02 | p<0.05 significant |
| valence - pitch | +0.0109 | weak positive | 6.960e-01 | p>=0.05 insignificant |
| valence - range | -0.0771 | weak negative | 5.794e-03 | p<0.05 significant |
| valence - key | +0.0119 | weak positive | 6.705e-01 | p>=0.05 insignificant |
| valence - mode | +0.3880 | positive | 3.640e-47 | p<0.05 significant |
| valence - pitchSD | -0.0666 | weak negative | 1.729e-02 | p<0.05 significant |
| valence - direction | +0.0010 | weak positive | 9.709e-01 | p>=0.05 insignificant |
| valence - volume | +0.1174 | weak positive | 2.597e-05 | p<0.05 significant |
| arousal - tempo | +0.1579 | weak positive | 1.382e-08 | p<0.05 significant |
| arousal - pitch | -0.1819 | weak negative | 5.714e-11 | p<0.05 significant |
| arousal - range | +0.3276 | positive | 2.324e-33 | p<0.05 significant |
| arousal - key | +0.0030 | weak positive | 9.138e-01 | p>=0.05 insignificant |
| arousal - mode | -0.0962 | weak negative | 5.775e-04 | p<0.05 significant |
| arousal - pitchSD | +0.3511 | positive | 2.201e-38 | p<0.05 significant |
| arousal - direction | -0.0958 | weak negative | 6.013e-04 | p<0.05 significant |
| arousal - volume | +0.3800 | positive | 3.558e-45 | p<0.05 significant |
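The table above was computed from the Analysis subset. As a minimal sketch (not the authors' original analysis script), the same kind of correlation can be recomputed with `scipy.stats.pearsonr`; the column names follow the Analysis features, and the class-label columns load as integer codes.
```python
# Minimal sketch: recompute one emotion-feature correlation from the Analysis
# subset. Assumes pandas and scipy are installed; not the authors' script.
from datasets import load_dataset
from scipy.stats import pearsonr

df = load_dataset("monetjoe/EMelodyGen", name="Analysis", split="train").to_pandas()

# class_label columns (valence, arousal, mode, ...) load as integer codes,
# so they can be correlated directly with the numeric features.
r, p = pearsonr(df["arousal"], df["pitchSD"])
print(f"arousal - pitchSD: r={r:+.4f}, p={p:.3e}")  # reported above as +0.3511, 2.201e-38
```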
### Feature distribution
| Feature | Distribution chart |
| :-------: | :-------------------------------------------------------------------------------------------: |
| key |  |
| pitch |  |
| range |  |
| pitchSD |  |
| tempo |  |
| volume |  |
| mode |  |
| direction |  |
## Processed EMOPIA & VGMIDI
The processed EMOPIA and processed VGMIDI datasets are used to evaluate the error-free rate of music scores generated after fine-tuning the backbone on existing emotion-labeled datasets. It is therefore essential that the processed data be compatible with the input format required by the pre-trained backbone.
We found that the average length of the dataset used to pre-train the backbone is approximately 20 measures, while the maximum input length supported by the pre-trained backbone is 32 measures. Consequently, we converted the original EMOPIA and VGMIDI data into XML scores, filtered out erroneous items, and segmented the scores into chunks of 20 measures each. An ending marker was appended to each chunk so that the model does not generate endlessly when a repetitive melody never presents a terminating mark. For the tail segment of each score, if it exceeded 10 measures it was split off as its own slice; otherwise it was merged into the preceding chunk. This ensures that no resulting slice exceeds 30 measures, keeping every slice within the maximum length supported by the backbone while averaging approximately 20 measures.
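The slicing rule can be summarized as follows. This is a minimal sketch over an abstract list of measures; the constants and the helper name are chosen here for illustration and are not taken from the original processing script.
```python
# Minimal sketch of the 20-measure slicing rule described above.
# CHUNK, TAIL_MIN and slice_measures are illustrative names, not the original code.
CHUNK = 20     # base slice length in measures
TAIL_MIN = 10  # a tail longer than this stays a separate slice

def slice_measures(measures: list) -> list:
    """Split a score (a list of measures) into slices of at most 30 measures."""
    slices = [measures[i:i + CHUNK] for i in range(0, len(measures), CHUNK)]
    # A short tail (<= 10 measures) is merged into the previous slice,
    # so every slice stays within 20 + 10 = 30 measures.
    if len(slices) > 1 and len(slices[-1]) <= TAIL_MIN:
        slices[-2].extend(slices.pop())
    return slices  # an ending marker is then appended to each slice

assert max(len(s) for s in slice_measures(list(range(47)))) <= 30  # slices of 20 and 27
```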
Note that current tools cannot fold repeated sections back into repeat signs when converting MIDI to XML. In fact, after converting the dataset used to pre-train the backbone into MIDI and expanding all repeats, its average length is approximately 35 measures. However, because of the maximum measure limit enforced during pre-training, repeat markers were not expanded at that stage, and since a repeat marker occupies only two characters, we could not adopt 35 measures as the slicing unit even for MIDI-derived data.
Subsequently, we converted the segmented XML slices into ABC notation, performed data augmentation by transposing to 15 keys, and extracted the melodic lines and control codes to produce the final processed EMOPIA and processed VGMIDI datasets. Both datasets share the same three-column structure: the first column is the control code, the second is the ABC notation characters, and the third is the 4Q emotion label inherited from the original dataset. The processed EMOPIA contains 21,480 samples and the processed VGMIDI 9,315, each split into training and test sets (roughly 90%/10%). Since the correlation between emotion and key is negligible (see the Pearson correlation table above), the 15-key transposition is unlikely to significantly distort the label distribution; a sketch of this step is shown below.
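For the transposition step, something along these lines can be used. The list of 15 target key signatures (7 flats through 7 sharps) and the tonic estimation via `analyze("key")` are assumptions of this sketch, not a description of the exact pipeline.
```python
# Minimal sketch of 15-key transposition with music21, applied here to an XML
# slice before ABC conversion; the target-key list is an assumption.
from music21 import converter, interval, pitch

TARGET_TONICS = ["C-", "G-", "D-", "A-", "E-", "B-", "F",
                 "C", "G", "D", "A", "E", "B", "F#", "C#"]  # 7 flats ... 7 sharps

def transpose_to_15_keys(xml_path: str) -> list:
    score = converter.parse(xml_path)
    tonic = score.analyze("key").tonic  # estimated tonic of the original slice
    return [score.transpose(interval.Interval(tonic, pitch.Pitch(name)))
            for name in TARGET_TONICS]
```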
## Data sources of Rough4Q
The Rough4Q dataset is a large-scale dataset created by automatically annotating a substantial amount of well-structured sheet music based on the conclusions of the correlation statistics above. Its data sources, listed in the table below, include both scores in the XML family of formats (XML / MXL / MusicXML) and scores in ABC notation. Note that not all source datasets include chord markings; since this work focuses solely on melody generation, the absence of chord information is not a significant concern here. After filtering out erroneous or duplicated scores and consolidating everything into a unified XML format, we used music21 to extract features rapidly. Because of the high data volume, we chose a few representative and computationally inexpensive features for approximate emotional annotation.
| Dataset | Size | Chord | Year | Paper |
| :--------------------------------------------------------------------------------------------------------------------------: | -----: | :---: | :---: | :------------------------------------------------------------------------------------------------------------- |
| [Midi-Wav Bi-directional Pop](https://ccmusic-database.github.io/database/cpop.html) | 111 | × | 2021 | [Music Data Sharing Platform for Academic Research (CCMusic)](https://zenodo.org/records/5654924) |
| [JSBach Chorales](https://dspace.mit.edu/bitstream/handle/1721.1/84963/Cuthbert_Ariza_ISMIR_2010.pdf?sequence=1&isAllowed=y) | 366 | √ | 2010 | [Chord-Conditioned Melody Harmonization With Controllable Harmonicity](https://arxiv.org/pdf/2202.08423) |
| [Nottingham](https://ifdo.ca/~seymour/nottingham/nottingham.html) | 1015 | √ | 2011 | Nottingham Database |
| [Wikifonia](http://www.synthzone.com/files/Wikifonia/) | 6394 | √ | 2018 | [Enhanced Wikifonia Leadsheet Dataset](https://zenodo.org/records/1476555) |
| [Essen](https://ifdo.ca/~seymour/runabc/esac/esacdatabase.html) | 10369 | × | 2013 | Essen Folk Song Database |
| [IrishMAN](https://huggingface.co/datasets/sander-wood/irishman) | 216281 | × | 2023 | [TunesFormer: Forming Irish Tunes with Control Codes by Bar Patching](https://ceur-ws.org/Vol-3528/paper1.pdf) |
According to the correlation statistics, mode is the only feature with which valence shows a substantial, significant positive correlation (its other significant correlations are weak). Mode was therefore selected as the feature for the valence dimension, with minor mode classified as low valence and major mode as high valence. Arousal is significantly and substantially positively correlated with pitch range, pitch SD, and RMS volume. Since computing RMS requires audio rendering, which is impractical for large-scale automatic annotation, it was excluded. Between pitch range and pitch SD, arousal correlates more strongly with pitch SD; moreover, pitch SD not only partially reflects pitch range but also captures the intensity of melodic variation, providing richer information. We therefore tentatively selected pitch SD as the criterion for the arousal dimension, classifying scores below the corpus median as low arousal and those above it as high arousal. This yields a rough Russell 4Q label from the resulting valence/arousal quadrant.
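A minimal sketch of this labeling rule is given below, using the standard Russell quadrant mapping (Q1 = high valence/high arousal, Q2 = low/high, Q3 = low/low, Q4 = high/low). The feature extraction with music21 (`analyze("key")` for mode, the standard deviation of MIDI pitch numbers for pitch SD) and the corpus-level median are assumptions of the sketch, not the authors' exact implementation.
```python
# Minimal sketch of the rough Russell 4Q rule: mode -> valence, pitch SD vs.
# the corpus-wide median -> arousal. Feature details are illustrative.
from statistics import pstdev
from music21 import converter

def mode_and_pitch_sd(path: str):
    score = converter.parse(path)
    mode = score.analyze("key").mode                  # "major" or "minor"
    midi = [p.midi for p in score.flatten().pitches]  # all pitches as MIDI numbers
    return mode, pstdev(midi)

def rough_4q(mode: str, pitch_sd: float, sd_median: float) -> str:
    high_valence = mode == "major"       # minor -> low valence, major -> high
    high_arousal = pitch_sd > sd_median  # above-median pitch SD -> high arousal
    if high_valence:
        return "Q1" if high_arousal else "Q4"
    return "Q2" if high_arousal else "Q3"
```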
This rough, noisy labeling primarily serves to record the state of mode and pitch SD as emotion-related embeddings, keeping the format consistent with the two processed datasets, EMOPIA and VGMIDI. We then applied the same data processing as described for those two datasets, preserving the labels while segmenting the scores. Notably, IrishMAN is also the dataset used for backbone pre-training, but its original pipeline discards scores longer than 32 measures, causing a significant loss of data; in contrast, our segmentation approach preserves these longer scores.
After processing, we found the data to be highly imbalanced: the numbers of Q3 and Q4 samples differed from the other categories by an order of magnitude. To address this, we augmented only the Q3 and Q4 categories by transposing them across 15 different keys. As a result, we obtained the Rough4Q dataset, which comprises approximately 521K samples in total, split into training and test sets (roughly 90%/10%).
## Statistics
| Dataset | Pie chart | Total | Train | Test |
| :------: | :------------------------------------------------------------------------------------------: | -----: | -----: | ----: |
| Analysis |  | 1278 | 1278 | - |
| VGMIDI |  | 9315 | 8383 | 932 |
| EMOPIA |  | 21480 | 19332 | 2148 |
| Rough4Q |  | 520673 | 468605 | 52068 |
## Mirror
## Evaluation
## Cite
```bibtex
@misc{zhou2025emelodygenemotionconditionedmelodygeneration,
title = {EMelodyGen: Emotion-Conditioned Melody Generation in ABC Notation with the Musical Feature Template},
author = {Monan Zhou and Xiaobing Li and Feng Yu and Wei Li},
year = {2025},
eprint = {2309.13259},
archiveprefix = {arXiv},
primaryclass = {cs.IR},
url = {https://arxiv.org/abs/2309.13259}
}
```