The object models and scan data can be downloaded from http://ycb-benchmarks.s3-website-us-east-1.amazonaws.com/.
For each object, we provide:
- 600 RGBD images,
- 600 high-resolution RGB images,
- Segmentation masks for each image,
- Calibration information for each image,
- Texture-mapped 3D mesh models.
To ease adoption across various manipulation research approaches, we collected the visual data commonly required by grasping algorithms and generated 3D models for use in simulation. The data were collected with the same scanning rig used for the BigBIRD dataset [3]. The rig has 5 RGBD sensors and 5 high-resolution RGB cameras arranged in a quarter-circular arc. Each object was placed on a computer-controlled turntable, which was rotated in 3-degree increments, yielding 120 turntable orientations. For more information, please refer to [1], [2].
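The view coverage per object follows directly from the rig geometry: 5 RGBD sensors times 120 turntable orientations (3-degree steps over a full revolution) gives the 600 RGBD images listed above. A minimal Python sketch of this enumeration follows; the `NP1`–`NP5` camera labels mirror the BigBIRD naming convention but are an assumption here, not part of this dataset's documented file layout.

```python
from itertools import product

# Assumption: cameras labeled NP1..NP5 as on the BigBIRD rig.
CAMERAS = [f"NP{i}" for i in range(1, 6)]
# Turntable angles in 3-degree increments: 0, 3, ..., 357 -> 120 orientations.
ANGLES = range(0, 360, 3)

def list_views():
    """Return all (camera, turntable angle) pairs covering one object."""
    return [(cam, ang) for cam, ang in product(CAMERAS, ANGLES)]

views = list_views()
print(len(views))   # 5 cameras * 120 orientations = 600 views per object
print(views[0])     # ('NP1', 0)
```

The same pairing is a convenient index when iterating over an object's images or matching each RGBD frame to its per-image calibration and segmentation mask.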
[1] Berk Calli, Aaron Walsman, Arjun Singh, Siddhartha Srinivasa, Pieter Abbeel, Aaron M. Dollar, Benchmarking in Manipulation Research: The YCB Object and Model Set and Benchmarking Protocols, IEEE Robotics and Automation Magazine, pp. 36–52, Sept. 2015.
[2] Berk Calli, Arjun Singh, James Bruce, Aaron Walsman, Kurt Konolige, Siddhartha Srinivasa, Pieter Abbeel, Aaron M. Dollar, Yale-CMU-Berkeley Dataset for Robotic Manipulation Research, The International Journal of Robotics Research, vol. 36, issue 3, pp. 261–268, April 2017.
[3] Arjun Singh, James Sha, Karthik S. Narayan, Tudor Achim, Pieter Abbeel, BigBIRD: A Large-Scale 3D Database of Object Instances, in International Conference on Robotics and Automation, 2014.