DARE (Diverse Visual question answering with Robustness Evaluation) is a carefully curated multiple‑choice VQA benchmark. It evaluates vision‑language models across five categories and includes four robustness assessments, varying the prompt, the subset of answer options, the output format, and the number of correct answers. The validation split contains images, questions, answer options, and correct answers, while the test split withholds the correct answers to prevent leakage.
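Since the validation split exposes questions, answer options, and correct answers, it can be scored with a simple multiple‑choice accuracy loop. A minimal sketch, assuming hypothetical field names `question`, `options`, and `answer` (check the actual dataset schema before use), with a trivial predictor standing in for a real vision‑language model:

```python
# Multiple-choice accuracy sketch for a VQA-style validation split.
# Field names below are assumptions, not the dataset's confirmed schema.

def accuracy(examples, predict):
    """Fraction of examples where predict() returns the correct option."""
    correct = sum(
        1 for ex in examples
        if predict(ex["question"], ex["options"]) == ex["answer"]
    )
    return correct / len(examples)

# Toy stand-ins for real validation examples and a real model.
examples = [
    {"question": "What color is the sky?", "options": ["blue", "red"], "answer": "blue"},
    {"question": "How many dogs are shown?", "options": ["one", "two"], "answer": "two"},
]

def predict_first(question, options):
    # Placeholder "model": always picks the first option.
    return options[0]

print(accuracy(examples, predict_first))  # → 0.5
```

The same loop extends to the robustness assessments by re-running it with rephrased prompts or reduced option subsets and comparing the resulting accuracies.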