OCID-Grasp
The OCID_grasp dataset was created by the Institute of Computer Graphics and Vision at Graz University of Technology, Austria. It extends the original OCID dataset with 1,763 RGB‑D images, more than 11,400 object segmentation masks, and more than 75,000 manually annotated grasp candidates; each object is assigned to one of 31 categories. By combining semantic segmentation with grasp detection, the dataset supports research on robotic grasp detection in complex scenes.
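Grasp-rectangle annotations in this family of datasets typically follow the Cornell convention: each grasp candidate is an oriented rectangle stored as four corner points, one `x y` pair per line. A minimal parsing sketch under that assumption (the file layout and corner ordering are assumptions for illustration, not confirmed details of OCID_grasp):

```python
import math

def parse_grasp_rects(lines):
    """Group Cornell-style annotation lines (one 'x y' corner per line)
    into grasp rectangles of four corner points each."""
    pts = [tuple(float(v) for v in ln.split()) for ln in lines if ln.strip()]
    assert len(pts) % 4 == 0, "expected a multiple of 4 corner points"
    return [pts[i:i + 4] for i in range(0, len(pts), 4)]

def rect_to_grasp(corners):
    """Convert 4 corners to (center_x, center_y, angle_rad, width, height).
    Assumes corners are ordered so edge 0->1 runs along the gripper-closing axis."""
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = corners
    cx = (x0 + x1 + x2 + x3) / 4.0
    cy = (y0 + y1 + y2 + y3) / 4.0
    angle = math.atan2(y1 - y0, x1 - x0)   # grasp orientation in the image plane
    width = math.hypot(x1 - x0, y1 - y0)   # extent along the closing axis
    height = math.hypot(x2 - x1, y2 - y1)  # jaw opening
    return cx, cy, angle, width, height
```

From the oriented rectangle, the grasp center, orientation, and opening extent follow directly from the corner geometry.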
Description
Dataset Overview
Dataset Name
End‑to‑End Trainable Deep Neural Network for Robotic Grasp Detection and Semantic Segmentation from RGB
Description
This repository contains code for an end‑to‑end trainable deep neural network for robotic grasp detection and semantic segmentation, using the OCID‑grasp dataset for training and testing.
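Grasp detection results in this literature are usually reported with the rectangle metric: a predicted grasp counts as correct if its orientation is within 30° of some ground-truth rectangle and the Jaccard index (intersection over union) of the two rectangles exceeds 0.25. A self-contained sketch of that check for convex, counter-clockwise rectangles, using Sutherland–Hodgman clipping to get the rotated-rectangle overlap (a generic implementation of the standard metric, not code from this repository):

```python
import math

def _cross(a, b, p):
    """Cross product (b - a) x (p - a); >= 0 means p is on/left of edge a->b."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def clip_polygon(subject, clipper):
    """Sutherland-Hodgman clipping of convex polygon `subject` against a
    convex, counter-clockwise polygon `clipper`."""
    out = list(subject)
    for i in range(len(clipper)):
        if not out:
            return []
        a, b = clipper[i], clipper[(i + 1) % len(clipper)]
        inp, out = out, []
        for j in range(len(inp)):
            p, q = inp[j], inp[(j + 1) % len(inp)]
            d1, d2 = _cross(a, b, p), _cross(a, b, q)
            if d1 >= 0:
                out.append(p)
            if (d1 >= 0) != (d2 >= 0):
                t = d1 / (d1 - d2)  # where edge p->q crosses the clip edge
                out.append((p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1])))
    return out

def polygon_area(poly):
    """Shoelace formula."""
    return 0.5 * abs(sum(poly[i][0] * poly[(i + 1) % len(poly)][1]
                         - poly[(i + 1) % len(poly)][0] * poly[i][1]
                         for i in range(len(poly))))

def rect_iou(r1, r2):
    inter = polygon_area(clip_polygon(r1, r2))
    union = polygon_area(r1) + polygon_area(r2) - inter
    return inter / union if union > 0 else 0.0

def grasp_correct(pred_angle, gt_angle, pred_rect, gt_rect,
                  angle_thresh=math.radians(30), iou_thresh=0.25):
    """Rectangle metric: orientation within 30 degrees and IoU above 0.25."""
    d = abs(pred_angle - gt_angle) % math.pi
    d = min(d, math.pi - d)  # grasp orientation is symmetric under 180 degrees
    return d < angle_thresh and rect_iou(pred_rect, gt_rect) > iou_thresh
```

The 180° symmetry step matters because a parallel-jaw grasp is unchanged when the gripper is rotated half a turn.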
Paper
- Title: End‑to‑End Trainable Deep Neural Network for Robotic Grasp Detection and Semantic Segmentation from RGB
- Authors: Stefan Ainetter, Friedrich Fraundorfer
- Conference: IEEE International Conference on Robotics and Automation (ICRA)
- Pages: 13452‑13458
- Year: 2021
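The paper details above correspond to a BibTeX record along these lines (the entry key is illustrative):

```latex
@inproceedings{ainetter2021grasp,
  author    = {Stefan Ainetter and Friedrich Fraundorfer},
  title     = {End-to-End Trainable Deep Neural Network for Robotic Grasp
               Detection and Semantic Segmentation from RGB},
  booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
  pages     = {13452--13458},
  year      = {2021}
}
```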
Requirements
- CUDA 10.1
- Linux with GCC 7 or 8
- PyTorch 1.1.0
Related Work
If you use OCID_grasp, please also cite the original OCID dataset.
Latest Research
A follow-up method combining grasp detection with category-agnostic instance segmentation was published at BMVC 2021.
Source
Organization: GitHub
Created: 2/11/2022