OCID-Grasp
The OCID_grasp dataset was created by the Institute of Computer Graphics and Vision at Graz University of Technology, Austria. It extends the original OCID dataset with 1,763 selected RGB‑D images, more than 11.4k object segmentation masks, and more than 75k manually annotated grasp candidates, with each object assigned to one of 31 categories. The dataset supports research on robotic grasp detection in cluttered scenes by combining semantic segmentation with grasp detection.
Dataset Overview
Dataset Name
OCID_grasp
Source Repository
End‑to‑End Trainable Deep Neural Network for Robotic Grasp Detection and Semantic Segmentation from RGB
Description
This repository contains code for an end‑to‑end trainable deep neural network that jointly performs robotic grasp detection and semantic segmentation from RGB input, using the OCID_grasp dataset for training and testing.
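For intuition about the grasp‑detection side, grasp candidates in this line of work are commonly parameterized as oriented rectangles (x, y, θ, w, h). The sketch below is a minimal illustration of that common convention (the paper's exact parameterization may differ); it converts the five‑tuple to the four corner points used in rectangle‑based grasp evaluation:

```python
import numpy as np

def grasp_to_corners(x, y, theta, w, h):
    """Convert an oriented grasp rectangle (center x/y, rotation theta in
    radians, opening width w, gripper-plate height h) to 4 corner points.

    Uses the common rectangle parameterization from the grasp-detection
    literature; the repository's exact convention may differ.
    """
    c, s = np.cos(theta), np.sin(theta)
    dx, dy = w / 2.0, h / 2.0
    # Corners in the rectangle's local frame, counter-clockwise.
    local = np.array([[-dx, -dy], [dx, -dy], [dx, dy], [-dx, dy]])
    rot = np.array([[c, -s], [s, c]])
    # Rotate the local corners, then translate to the center (x, y).
    return local @ rot.T + np.array([x, y])

corners = grasp_to_corners(x=320, y=240, theta=np.pi / 6, w=60, h=20)
print(corners)  # 4x2 array of (x, y) image coordinates
```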
Paper
- Title: End‑to‑End Trainable Deep Neural Network for Robotic Grasp Detection and Semantic Segmentation from RGB
- Authors: Stefan Ainetter, Friedrich Fraundorfer
- Conference: IEEE International Conference on Robotics and Automation (ICRA)
- Pages: 13452‑13458
- Year: 2021
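For citation convenience, a BibTeX entry assembled from the details above (verify against the IEEE record before use):

```bibtex
@inproceedings{ainetter2021end,
  title     = {End-to-End Trainable Deep Neural Network for Robotic Grasp Detection and Semantic Segmentation from RGB},
  author    = {Ainetter, Stefan and Fraundorfer, Friedrich},
  booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
  pages     = {13452--13458},
  year      = {2021}
}
```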
Requirements
- CUDA 10.1
- Linux with GCC 7 or 8
- PyTorch 1.1.0
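Because the versions are pinned tightly, it can save time to confirm the environment before building the code; a minimal check, assuming a standard PyTorch install:

```python
import torch

# The repository pins PyTorch 1.1.0 built against CUDA 10.1;
# mismatched versions are a common source of build failures.
print(torch.__version__)          # expected: 1.1.0
print(torch.version.cuda)         # expected: 10.1
print(torch.cuda.is_available())  # should be True on a CUDA-capable machine
```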
Dataset Description
OCID_grasp extends OCID with 1,763 selected RGB‑D images, >11.4 k object masks, and >75 k hand‑labeled grasp candidates. Objects are classified into 31 categories.
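As an illustration of working with such annotations, the sketch below assumes Cornell‑style grasp files (four corner points per rectangle, one "x y" pair per line). That file layout is an assumption for illustration only; verify it against the repository's dataset documentation before relying on it:

```python
import numpy as np

def load_grasp_rectangles(path):
    """Parse a Cornell-style grasp annotation file into an (N, 4, 2) array.

    Assumes one "x y" corner per line and four consecutive corners per
    rectangle; this is an illustrative guess at the format, not a spec
    taken from the OCID_grasp documentation.
    """
    points = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) == 2:
                points.append([float(parts[0]), float(parts[1])])
    pts = np.asarray(points, dtype=np.float32)
    return pts.reshape(-1, 4, 2)  # N rectangles, 4 corners, (x, y) each

# Example (hypothetical file name):
# rects = load_grasp_rectangles("some_image_grasps.txt")
```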
Related Work
If you use OCID_grasp, please also cite the original OCID dataset, introduced in the EasyLabel paper (Suchi et al., ICRA 2019).
Latest Research
A follow‑up method that combines grasp detection with category‑agnostic instance segmentation was published at BMVC 2021.