JUHE API Marketplace
High Quality Data

Dataset Hub

Explore high-quality datasets for your AI and machine learning projects.


MC-Bench

Visual Grounding
Multimodal Language Models

MC‑Bench is a multi‑context visual grounding benchmark created by the Cognitive Computing and Learning Lab at Zhejiang University. It evaluates multimodal large language models (MLLMs) on visual grounding across multiple images. The benchmark comprises 2,000 high‑quality, manually annotated samples spanning diverse domains and subjects, with images gathered from multiple sources. Each sample consists of an image pair, instance‑level annotations, and a textual prompt in one of three styles: referring, comparison, or reasoning. MC‑Bench targets instance‑level visual grounding in multi‑image scenarios, addressing the limitations of current MLLMs in handling complex textual descriptions and cross‑image context understanding.
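The sample structure described above (an image pair, instance-level annotations, and a styled prompt) can be sketched as a simple data model. This is a minimal illustration only: the field names and bounding-box format are assumptions, not the dataset's actual schema.

```python
from dataclasses import dataclass
from typing import List, Tuple

# The three prompt styles named by the benchmark.
PROMPT_STYLES = {"referring", "comparison", "reasoning"}


@dataclass
class BoundingBox:
    """One grounded instance; format is an assumption for illustration."""
    image_index: int                  # which image of the pair (0 or 1)
    box: Tuple[int, int, int, int]    # (x_min, y_min, x_max, y_max) in pixels


@dataclass
class MCBenchSample:
    """Hypothetical sketch of one MC-Bench sample (field names assumed)."""
    image_paths: Tuple[str, str]      # the image pair
    prompt: str                       # the textual prompt
    prompt_style: str                 # one of PROMPT_STYLES
    annotations: List[BoundingBox]    # instance-level ground truth

    def __post_init__(self):
        if self.prompt_style not in PROMPT_STYLES:
            raise ValueError(f"unknown prompt style: {self.prompt_style}")
```

A model under evaluation would receive `image_paths` and `prompt`, and its predicted boxes would be scored against `annotations`.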

Source: arXiv

JailbreakV-28K/JailBreakV-28k

Multimodal Language Models
Security Evaluation

JailBreakV-28K is a benchmark dataset for evaluating the robustness of multimodal large language models (MLLMs) against jailbreak attacks. It contains 28,000 jailbreak text‑image pairs: 20,000 text‑based attacks transferred from LLM jailbreaks and 8,000 image‑based MLLM jailbreak attacks, covering 16 security policies and 5 jailbreak methods. A companion dataset, RedTeam_2K, provides 2,000 harmful queries for probing alignment vulnerabilities in LLMs and MLLMs, drawn from 8 data sources and spanning the same 16 security policies.
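Robustness benchmarks of this kind are commonly scored by attack success rate (ASR), broken down per security policy. A minimal sketch, assuming hypothetical record fields (`policy`, `jailbroken`) rather than the dataset's actual schema:

```python
from collections import defaultdict
from typing import Dict, Iterable


def per_policy_asr(results: Iterable[Dict]) -> Dict[str, float]:
    """Compute attack success rate per security policy.

    Each record is assumed to carry a "policy" label and a boolean
    "jailbroken" judgment for one attack attempt; these field names
    are illustrative, not taken from the dataset.
    """
    hits = defaultdict(int)
    totals = defaultdict(int)
    for r in results:
        totals[r["policy"]] += 1
        if r["jailbroken"]:
            hits[r["policy"]] += 1
    return {policy: hits[policy] / totals[policy] for policy in totals}
```

A policy with a high ASR indicates a category of harmful content the evaluated model fails to refuse under attack.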

Source: Hugging Face