MC‑Bench is a multi‑context visual grounding benchmark created by the Cognitive Computing and Learning Lab at Zhejiang University. It evaluates multimodal large language models (MLLMs) on visual grounding tasks that span multiple images. The benchmark comprises 2,000 high‑quality, manually annotated samples covering diverse domains and subjects, gathered from a variety of image sources. Each sample pairs two images with instance‑level annotations and a textual prompt in one of three styles: referring, comparison, or reasoning. MC‑Bench targets instance‑level visual grounding in multi‑image scenarios, aiming to expose the limitations of current MLLMs in handling complex textual descriptions and cross‑image context understanding.
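To make the sample structure concrete, here is a minimal Python sketch of what one MC‑Bench record might look like. The field names (`image_paths`, `prompt_style`, `boxes`) and the bounding‑box layout are assumptions for illustration, not the dataset's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class MCBenchSample:
    # Hypothetical schema; field names are assumptions, not MC-Bench's real format.
    image_paths: tuple      # the pair of images in the sample
    prompt: str             # textual prompt describing the target instance(s)
    prompt_style: str       # one of: "referring", "comparison", "reasoning"
    boxes: list = field(default_factory=list)  # instance-level annotations,
                            # e.g. {"image_index": 0, "bbox": [x1, y1, x2, y2]}

sample = MCBenchSample(
    image_paths=("img_a.jpg", "img_b.jpg"),
    prompt="Locate the dog that appears in both images.",
    prompt_style="reasoning",
    boxes=[{"image_index": 0, "bbox": [34, 50, 120, 180]},
           {"image_index": 1, "bbox": [10, 22, 95, 160]}],
)

assert sample.prompt_style in {"referring", "comparison", "reasoning"}
assert len(sample.image_paths) == 2
```

Note that grounding targets may appear in either or both images, which is why each annotation carries an `image_index` in this sketch.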