Aaron Dollar, Yale University
Title: The YCB object benchmark for manipulation research
I will discuss some of our joint efforts at Yale, CMU, and Berkeley toward developing a physical benchmark of objects and software tools for autonomous robotic manipulation.
Dieter Fox, University of Washington
Title: Experiences With an RGB-D Object Dataset
In this talk, I will present our effort in developing the first dataset for RGB-D-based object recognition. I will also discuss lessons learned from this work and how they might apply to grasping and manipulation datasets.
Yasemin Bekiroglu, University of Birmingham
Title: Assessing grasp stability and object shape modeling based on visual and tactile data
I will talk about probabilistic approaches that use real sensory data, e.g., visual and tactile, to learn models for assessing grasp success (both discriminative and generative) and for understanding object shape, which is important for grasp planning. I will also introduce a low-cost pipeline and database for reproducible manipulation research. Our approach combines inexpensive generation of detailed 3D object models from monocular camera images with a state-of-the-art object tracking algorithm.
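As a minimal illustration of the discriminative variant, the sketch below trains a grasp-stability classifier on flattened tactile feature vectors; the data, shapes, and labels are hypothetical stand-ins, not the pipeline presented in the talk.

    # Minimal sketch: discriminative grasp-stability classification from
    # tactile features. Synthetic data; shapes and labels are illustrative.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 64))              # 500 grasps x 64 tactile features
    y = (X[:, :8].sum(axis=1) > 0).astype(int)  # synthetic stability labels

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))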
Matteo Bianchi, IIT
Title: An open-access repository to share data and tools for the study of human and robotic hands: the HandCorpus initiative
HandCorpus is an open-access repository for sharing data, tools, and analyses about human and robotic hands. The HandCorpus website is a cross-platform, user-friendly portal for researchers interested in sharing datasets and exchanging ideas about the most versatile end-effector known, the human hand. Over the last few years the HandCorpus community has grown and now (September 2015) comprises five European Commission (EC) projects and more than 20 research groups across Europe and the United States. Finally, the website is cross-platform, cross-browser, and fully accessible from all kinds of mobile devices.
Jeannette Bohg, Max Planck Institute
Title: Leveraging Big Data for Grasp Planning
We have publicly released a new large-scale database of grasps applied to a large set of objects from numerous categories. These grasps are generated in simulation and annotated with the standard epsilon metric and a new physics-based metric. We use a descriptive and efficient representation of the local object shape at which each grasp is applied, and every grasp in the database carries both metrics and this shape representation.
Given this data, we present a two-fold analysis:
(i) We use crowdsourcing to analyze how well the two metrics correlate with grasp success as predicted by humans. The results confirm that the proposed physics-based metric is a more consistent predictor of grasp success than the epsilon metric (sketched after this list). Furthermore, they support the hypothesis that human labels are not required for good ground-truth grasp data; instead, the physics-based metric can be used to label simulation data.
(ii) We apply big-data learning techniques (Convolutional Neural Networks and Random Forests) to show how they can leverage the large-scale database for improved prediction of grasp success.
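The standard epsilon metric referenced above is the Ferrari-Canny grasp quality: the radius of the largest origin-centered ball contained in the convex hull of the contact wrenches. Below is a hedged sketch of that computation, with synthetic wrenches standing in for discretized friction-cone edges.

    # Sketch of the epsilon (Ferrari-Canny) grasp quality metric: the radius
    # of the largest ball centered at the wrench-space origin that fits
    # inside the convex hull of the contact wrenches.
    import numpy as np
    from scipy.spatial import ConvexHull

    def epsilon_quality(wrenches):
        """wrenches: (n, 6) array of contact wrenches (force + torque)."""
        hull = ConvexHull(wrenches)
        # Each facet satisfies unit_normal . x + offset = 0, with offset <= 0
        # when the origin is inside; -offset is the facet's distance to the origin.
        offsets = hull.equations[:, -1]
        if np.any(offsets > 0):          # origin outside the hull: no force closure
            return 0.0
        return float(np.min(-offsets))

    rng = np.random.default_rng(1)
    toy_wrenches = rng.normal(size=(40, 6))  # stand-in for friction-cone wrenches
    print("epsilon quality:", epsilon_quality(toy_wrenches))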
Yu Sun, University of South Florida
Title: Interactive motion and wrench in instrument manipulation
Tamim Asfour, Karlsruhe Institute of Technology (KIT)
Title: The KIT Whole-Body Human Motion Database
We present a publicly released large-scale whole-body human motion database comprising motion data of the observed human subject as well as of the objects with which the subject interacts. We describe procedures for the systematic recording of human motion data together with complementary data such as video recordings and additional sensor measurements (force, IMU, …), as well as environmental elements and objects. The availability of accurate object trajectories together with the associated object mesh models makes the data especially useful for the analysis of manipulation, locomotion, and loco-manipulation tasks. We present procedures and techniques for motion capture, annotation, and organization in large-scale databases, as well as for the normalization of human motion to a unified representation based on a reference model of the human body. In addition, we provide methods and software tools for efficient search in the database as well as for the transfer of subject-specific motions to robots with different embodiments, and we discuss several applications of the database in our current research on whole-body grasping and loco-manipulation.
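As a toy illustration of the normalization step, the sketch below rescales subject-specific marker trajectories to a reference body height and resamples them to a common length; the actual pipeline normalizes to a full kinematic reference model of the human body, which this deliberately omits.

    # Toy sketch of motion normalization: scale Cartesian trajectories to a
    # reference body height and resample to a fixed number of frames.
    import numpy as np

    def normalize_motion(traj, subject_height, n_frames=100, ref_height=1.75):
        """traj: (T, d) Cartesian marker coordinates in meters."""
        scaled = traj * (ref_height / subject_height)
        t_old = np.linspace(0.0, 1.0, len(scaled))
        t_new = np.linspace(0.0, 1.0, n_frames)
        return np.stack(
            [np.interp(t_new, t_old, scaled[:, k]) for k in range(scaled.shape[1])],
            axis=1,
        )

    walk = np.cumsum(np.random.default_rng(2).normal(size=(240, 3)), axis=0)
    print(normalize_motion(walk, subject_height=1.62).shape)  # (100, 3)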
Marco Gabiccini/H. Marino, University of Pisa
Title: Datasets (and tools) from disconnected markers to organized behaviors: a path towards autonomous manipulation
In this talk, I will discuss the pros and cons of generating large grasping datasets in experimental settings versus simulated environments, possibly including the use of optimal control methods to replace the human in the loop.
Ken Goldberg, UC Berkeley
Title: Dexterity Network (Dex-Net): A Cloud-Based Network of 3D Objects for Robust Grasp Planning
Dexterity Network 1.0 (Dex-Net) is a data-driven approach to robust robot grasping and manipulation based on a new dataset that currently includes over 10,000 unique 3D object models and 2.5 million parallel-jaw grasps. Dex-Net includes a Multi-Armed Bandit algorithm with correlated rewards from prior grasps to estimate the probability of force closure under sampled uncertainty in object and gripper pose and in friction. Dex-Net 1.0 uses Multi-View Convolutional Neural Networks (MV-CNNs), a new deep learning method for 3D object classification, as a similarity metric between objects. Dex-Net 1.0 runs on the Google Cloud Platform with up to 1,500 virtual cores in parallel, reducing runtime by three orders of magnitude. Experiments suggest that using prior data can significantly improve the quality and reduce the complexity of robust grasp planning.
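A minimal sketch of the bandit idea follows: Thompson sampling over Beta-Bernoulli grasp arms, where each simulated trial updates the posterior probability of force closure for one candidate grasp. Dex-Net's version additionally correlates rewards across similar grasps via the MV-CNN similarity metric, which this toy version omits; all data here are synthetic.

    # Sketch of multi-armed-bandit grasp selection via Thompson sampling
    # with independent Beta-Bernoulli arms (no correlated rewards).
    import numpy as np

    rng = np.random.default_rng(0)
    true_p = rng.uniform(0.1, 0.9, size=20)    # unknown force-closure probabilities
    alpha = np.ones(20)                        # Beta(1, 1) prior per candidate grasp
    beta = np.ones(20)

    for _ in range(1000):
        arm = int(np.argmax(rng.beta(alpha, beta)))  # sample posteriors, pick best draw
        reward = rng.random() < true_p[arm]          # simulate one grasp trial
        alpha[arm] += reward
        beta[arm] += 1 - reward

    print("best grasp estimate:", int(np.argmax(alpha / (alpha + beta))))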