
UniMed

UniMed is a large‑scale, open‑source multimodal medical dataset created by the Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) and other institutions. It contains over 5.3 million image‑text pairs across six imaging modalities: X‑ray, CT, MRI, Ultrasound, Pathology, and Fundus. The dataset is built by converting modality‑specific classification datasets into image‑text format using large language models, and augmenting them with existing medical image‑text data, enabling scalable pre‑training of vision‑language models (VLMs). UniMed aims to alleviate the scarcity of publicly available large‑scale medical image‑text data and supports tasks such as zero‑shot classification and cross‑modal generalization.
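The conversion idea can be sketched in a few lines: a (image, label) sample from a modality‑specific classification dataset becomes an (image, caption) pair. This is a minimal fixed‑template illustration; the released dataset uses LLM‑generated text, and the template strings and function names below are illustrative assumptions, not the actual pipeline.

```python
# Illustrative sketch: turn a classification sample into an image-text pair.
# The templates here are made up for illustration; UniMed itself uses
# LLM-generated captions rather than fixed templates.
TEMPLATES = {
    "X-ray": "a chest X-ray showing {label}",
    "Fundus": "a fundus photograph showing {label}",
}

def to_image_text_pair(image_path, modality, label):
    """Convert (image, label) from a classification dataset into an
    (image, caption) pair suitable for contrastive image-text pre-training."""
    caption = TEMPLATES[modality].format(label=label.lower())
    return {"image": image_path, "text": caption}

pair = to_image_text_pair("img_001.png", "X-ray", "Pneumonia")
print(pair["text"])  # → a chest X-ray showing pneumonia
```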

Updated 12/14/2024
arXiv

Description

UniMed‑CLIP: Towards a Unified Image‑Text Pre‑training Paradigm for Diverse Medical Imaging Modalities

Dataset Overview

Dataset Name

UniMed‑CLIP

Dataset Description

UniMed‑CLIP pairs a unified image‑text pre‑training dataset (UniMed) with contrastive vision‑language models trained on it. The dataset comprises over 5.3 million image‑text pairs covering six modalities: X‑ray, CT, MRI, Ultrasound, Pathology, and Fundus.

Dataset Features

  1. Multimodal Coverage: Includes six distinct medical imaging modalities, providing rich multimodal data.
  2. Large Scale: Over 5.3 million image‑text pairs furnish a solid foundation for training general medical VLMs.
  3. Open‑Source: Accompanied by detailed code and annotation files for dataset preparation, fostering open research in medical VLMs.

Dataset Applications

UniMed‑CLIP primarily supports training and evaluating medical vision‑language models (VLMs), and performs particularly well in zero‑shot classification across imaging modalities.
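Zero‑shot classification with a CLIP‑style model reduces to comparing an image embedding against one text embedding per class prompt. The sketch below shows only that comparison step with toy vectors; in practice the embeddings would come from the UniMed‑CLIP image and text encoders (an assumption here, since the official loading API is not shown on this page).

```python
import numpy as np

def zero_shot_classify(image_emb, text_embs, class_names):
    """CLIP-style zero-shot classification: L2-normalise the image embedding
    and one text embedding per class prompt, then pick the class with the
    highest cosine similarity (dot product of unit vectors)."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sims = txt @ img                      # cosine similarity per class
    return class_names[int(np.argmax(sims))], sims

# Toy embeddings standing in for real encoder outputs (assumption: these
# would normally be produced by the UniMed-CLIP encoders).
image_emb = np.array([0.9, 0.1, 0.0])
text_embs = np.array([
    [1.0, 0.0, 0.0],   # e.g. prompt "an X-ray of ..."
    [0.0, 1.0, 0.0],   # e.g. prompt "an MRI of ..."
])
label, sims = zero_shot_classify(image_emb, text_embs, ["X-ray", "MRI"])
print(label)  # → X-ray
```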

Dataset Contributions

  1. UniMed Dataset: An open‑source large‑scale multimodal medical dataset with over 5.3 million samples across six modalities.
  2. UniMed‑CLIP VLMs: Contrastive learning VLMs trained on UniMed, outperforming existing general VLMs across multiple medical modalities.
  3. Extensive Evaluation: Provides ablation studies on design choices and releases training code, dataset, and model checkpoints to advance medical VLM research.

Dataset Performance

| Method | X‑ray | Retinal‑Fundus | CT | MRI | US | Histopathology | Average |
|---|---|---|---|---|---|---|---|
| BioMedCLIP | 55.43 | 22.87 | 43.99 | 64.59 | 49.20 | 54.50 | 49.02 |
| PMC‑CLIP | 52.64 | 25.84 | 66.06 | 63.68 | 62.51 | 53.56 | 53.37 |
| UniMed‑CLIP | 68.78 | 31.23 | 85.54 | 68.83 | 68.64 | 59.96 | 61.63 |

Dataset Updates

  • 13 December 2024: Released annotation and code scripts for preparing the UniMed pre‑training dataset, along with training and inference code and pretrained checkpoints for UniMed‑CLIP.

Dataset Preparation

Detailed instructions and annotation files for dataset preparation are available in the UniMed‑DATA.md document.

Pre‑trained Models

Three UniMed‑CLIP model weights are provided:

| model_name | Text encoder | Pretrained weights | Resolution | GPUs for training | Avg. score (21 datasets) |
|---|---|---|---|---|---|
| ViT‑B‑16‑quickgelu | BiomedNLP‑BiomedBERT‑base‑uncased‑abstract | unimed_clip_vit_b16 | 224 | 16 × A100 (40 GB) | 61.63 |
| ViT‑L‑14‑quickgelu | BiomedNLP‑BiomedBERT‑large‑uncased‑abstract | unimed_clip_vit_l14_large_text_encoder | 336 | 16 × A100 (40 GB) | 62.09 |
| ViT‑L‑14‑quickgelu | BiomedNLP‑BiomedBERT‑base‑uncased‑abstract | unimed_clip_vit_l14_base_text_encoder | 336 | 16 × A100 (40 GB) | 64.84 |
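The checkpoint table above can be kept as a small config mapping so that the matching text encoder and input resolution travel with each checkpoint name. The dictionary layout and field names below are illustrative assumptions, not an official API of the UniMed‑CLIP repository.

```python
# The three released UniMed-CLIP checkpoints as a config mapping.
# Field names ("vision", "text", "resolution") are illustrative, not official.
UNIMED_CLIP_MODELS = {
    "unimed_clip_vit_b16": {
        "vision": "ViT-B-16-quickgelu",
        "text": "BiomedNLP-BiomedBERT-base-uncased-abstract",
        "resolution": 224,
    },
    "unimed_clip_vit_l14_large_text_encoder": {
        "vision": "ViT-L-14-quickgelu",
        "text": "BiomedNLP-BiomedBERT-large-uncased-abstract",
        "resolution": 336,
    },
    "unimed_clip_vit_l14_base_text_encoder": {
        "vision": "ViT-L-14-quickgelu",
        "text": "BiomedNLP-BiomedBERT-base-uncased-abstract",
        "resolution": 336,
    },
}

cfg = UNIMED_CLIP_MODELS["unimed_clip_vit_b16"]
print(cfg["resolution"])  # → 224
```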

Citation

If you use this dataset, please cite the following paper:

@article{khattak2024unimed,
  title={UniMed-CLIP: Towards a Unified Image-Text Pre-training Paradigm for Diverse Medical Imaging Modalities},
  author={Khattak, Muhammad Uzair and Kunhimon, Shahina and Naseer, Muzammal and Khan, Salman and Khan, Fahad Shahbaz},
  journal={arXiv preprint arXiv:2412.10372},
  year={2024}
}



Topics

Medical Imaging
Multimodal Data

Source

Organization: arXiv

Created: 12/14/2024
