MusicSet
MusicSet is built on the MTG‑Jamendo dataset and focuses on music audio paired with rich textual descriptions. Tracks with at least five tags are selected; the middle 80% of each audio file is extracted and split into 10‑second clips, with non‑melodic sections removed. The clips are saved as individual WAV files, and their descriptive metadata is stored in JSON files. Textual descriptions are generated with the DeepSeek API, which consolidates each track's tags into full sentences in the MusicCaps description style. MusicSet ultimately contains about 150,000 10‑second music‑text pairs and integrates material from MusicBench and MusicCaps.
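The tag-to-caption step can be sketched as follows. This is a minimal illustration, not the authors' published code: the prompt wording is an assumption, and the DeepSeek API is accessed through its OpenAI-compatible endpoint (the `deepseek-chat` model name and `https://api.deepseek.com` base URL are from DeepSeek's platform documentation).

```python
def build_prompt(tags: list[str]) -> str:
    """Consolidate a track's tags into one captioning instruction.

    The exact instruction used for MusicSet is not published; this
    wording is an illustrative assumption.
    """
    return (
        "Write a single-sentence music description in the style of "
        "MusicCaps that covers all of these tags: " + ", ".join(tags) + "."
    )

def caption_tags(tags: list[str], api_key: str) -> str:
    """Request a MusicCaps-style caption for a tag list from DeepSeek."""
    # DeepSeek exposes an OpenAI-compatible chat-completions API,
    # so the standard OpenAI client can be pointed at it.
    from openai import OpenAI
    client = OpenAI(api_key=api_key, base_url="https://api.deepseek.com")
    resp = client.chat.completions.create(
        model="deepseek-chat",  # assumed model choice
        messages=[{"role": "user", "content": build_prompt(tags)}],
    )
    return resp.choices[0].message.content

# Example (requires a valid API key):
# caption_tags(["rock", "guitar", "energetic", "male vocals"], api_key="...")
```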
MusicSet Dataset
Overview
MusicSet is constructed from the MTG‑Jamendo dataset by selecting, segmenting, and captioning music audio. The dataset contains roughly 150,000 10‑second music‑text pairs.
Data Processing
- Audio Selection: Choose music audio with at least five tags.
- Audio Segmentation: Load audio files, extract the middle 80%, and split into 10‑second segments, discarding non‑melodic parts.
- Tag Expansion: Use the DeepSeek API to expand multiple tags into full descriptive texts.
- Data Integration: Combine the generated music‑text pairs with MusicBench and MusicCaps to form the final MusicSet dataset.
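The selection and segmentation steps above can be sketched as follows. This is a hypothetical reconstruction, not the authors' code: the sample rate, the RMS-energy stand-in for the non-melodic filter, and all names are assumptions.

```python
import numpy as np

SR = 16000          # assumed sample rate
CLIP_SECONDS = 10   # clip length used by MusicSet

def middle_80_percent(audio: np.ndarray) -> np.ndarray:
    """Keep the middle 80% of the waveform by trimming 10% from each end."""
    n = len(audio)
    trim = n // 10
    return audio[trim:n - trim]

def split_into_clips(audio: np.ndarray, sr: int = SR) -> list[np.ndarray]:
    """Split into non-overlapping 10-second clips, dropping the remainder."""
    clip_len = sr * CLIP_SECONDS
    n_clips = len(audio) // clip_len
    return [audio[i * clip_len:(i + 1) * clip_len] for i in range(n_clips)]

def is_melodic(clip: np.ndarray, threshold: float = 0.01) -> bool:
    """Placeholder for the non-melodic filter: a simple RMS-energy check.
    The actual criterion is not specified in the dataset description."""
    return float(np.sqrt(np.mean(clip ** 2))) > threshold

# Example on a synthetic 60-second waveform:
audio = (np.random.randn(SR * 60) * 0.1).astype(np.float32)
middle = middle_80_percent(audio)                      # 48 s remain
clips = [c for c in split_into_clips(middle) if is_melodic(c)]
```

Each surviving clip would then be written out as its own WAV file, with its tags and generated caption recorded alongside it.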
Data Format
- Audio Files: Saved as individual WAV files.
- Description Files: Saved as JSON files.
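A description record might look like the following. The field names and values here are purely illustrative; the actual JSON schema is not documented in this card.

```json
{
  "audio": "clips/track_000123_clip_04.wav",
  "duration": 10.0,
  "tags": ["electronic", "ambient", "synthesizer", "calm"],
  "caption": "A calm ambient electronic piece led by soft synthesizer pads."
}
```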
Citation
@misc{wei2024melodyneedmusicgeneration,
title={Melody Is All You Need For Music Generation},
author={Shaopeng Wei and Manzhen Wei and Haoyu Wang and Yu Zhao and Gang Kou},
year={2024},
eprint={2409.20196},
archivePrefix={arXiv},
primaryClass={cs.SD},
url={https://arxiv.org/abs/2409.20196},
}