
FIT-RS

FIT‑RS is a large‑scale fine‑grained instruction‑tuning dataset containing 1,800,851 high‑quality instruction samples, designed to enhance the fine‑grained understanding capabilities of Remote Sensing Large Multimodal Models (RSLMMs).

Source: GitHub
Created: Jun 6, 2024
Updated: Jun 7, 2024
Overview

Dataset description and usage context

SkySenseGPT Dataset Overview

Main Content

  • FIT‑RS (Remote Sensing Fine‑Grained Instruction Tuning) Dataset: 1,800,851 high‑quality instruction samples aimed at improving fine‑grained comprehension in remote‑sensing multimodal models, especially for understanding semantic relationships between objects in complex remote‑sensing scenes.
  • FIT‑RSRC (Remote Sensing Relation Comprehension) Benchmark: A single‑choice benchmark evaluated with the CircularEval strategy; it includes high‑quality distractor options and unanswerable questions, and is designed to evaluate LMMs' relational understanding in remote sensing.

Dataset Details

FIT‑RS

  • Description: Large‑scale fine‑grained instruction‑tuning dataset with 1,800,851 high‑quality instruction samples, intended to boost RSLMMs' fine‑grained comprehension abilities.

FIT‑RSRC

  • Description: Benchmark for comprehensive evaluation of LMMs' relational understanding in remote sensing, employing single‑choice questions with four question types, high‑quality distractors, and CircularEval as the evaluation strategy.
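Under CircularEval, each single‑choice question is posed several times with the answer options circularly shifted, and the model is credited only if it picks the correct option under every rotation, which filters out lucky or position‑biased guesses. A minimal sketch of the idea (the `ask_model` callable is a hypothetical stand‑in, not part of the FIT‑RSRC release):

```python
from collections import deque

def circular_eval(question, options, correct_idx, ask_model):
    """Return True only if ask_model picks the correct option
    under every circular rotation of the options list."""
    n = len(options)
    opts = deque(options)
    for shift in range(n):
        rotated = list(opts)
        # After `shift` left-rotations, the correct answer has moved
        # from correct_idx to (correct_idx - shift) mod n.
        target = (correct_idx - shift) % n
        choice = ask_model(question, rotated)  # index into `rotated`
        if choice != target:
            return False
        opts.rotate(-1)  # shift options left by one for the next pass
    return True

# Toy usage: an "oracle" that always finds the right answer text passes,
# a model that always answers "A" fails once the answer rotates away.
options = ["bridge", "ship", "airport", "none of the above"]
oracle = lambda q, opts: opts.index("ship")
always_a = lambda q, opts: 0
print(circular_eval("What is docked at the pier?", options, 1, oracle))    # True
print(circular_eval("What is docked at the pier?", options, 1, always_a))  # False
```

This mirrors the strict all‑rotations criterion described above; an implementation for the actual benchmark would additionally handle the unanswerable‑question option.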

Download Links

  • FIT‑RS: Fine‑grained instruction dataset with 1.8 M samples.
  • FIT‑RSRC: Single‑choice benchmark for relational comprehension.
  • SkySenseGPT: Remote‑sensing large multimodal model.

License

  • Released under the Apache 2.0 License.

Citation

  • If this work benefits your research, please star the repository ⭐ and cite our paper:
@article{luo2024sky,
  title={SkySenseGPT: A Fine-Grained Instruction Tuning Dataset and Model for Remote Sensing Vision-Language Understanding},
  author={Luo, Junwei and Pang, Zhen and Zhang, Yongjun and Wang, Tingzhu and Wang, Linlin and Dang, Bo and Lao, Jiangwei and Wang, Jian and Chen, Jingdong and Tan, Yihua and Li, Yansheng},
  journal={arXiv preprint arXiv:2406.10100},
  year={2024}
}