Multi-Task Domain Adaptation for Deep Learning of Instance Grasping from Simulation

A new paper from Google Brain and X using PyBullet:
Learning-based approaches to robotic manipulation are limited by the scalability of data collection and accessibility of labels. In this paper, we present a multi-task domain adaptation framework for instance grasping in cluttered scenes by utilizing simulated robot experiments. Our neural network takes monocular RGB images and the instance segmentation mask of a specified target object as inputs, and predicts the probability of successfully grasping the specified object for each candidate motor command. The proposed transfer learning framework trains a model for instance grasping in simulation and uses a domain-adversarial loss to transfer the trained model to real robots using indiscriminate grasping data, which is available both in simulation and the real world. We evaluate our model in real-world robot experiments, comparing it with alternative model architectures as well as an indiscriminate grasping baseline.

See also https://sites.google.com/corp/view/multi-task-domain-adaptation and
https://arxiv.org/abs/1710.06422

Bullet 2.87 with pybullet robotics Reinforcement Learning environments

Bullet 2.87 has improved support for robotics, reinforcement learning and VR. In particular, see the “Reinforcement Learning” section in the pybullet quickstart guide at http://pybullet.org. There are also preliminary C# bindings to allow the use of pybullet inside Unity 3D for robotics and reinforcement learning. In addition, Vector Unit's Beach Buggy Racing, which uses Bullet, has been released for the Nintendo Switch!


You can download the release from https://github.com/bulletphysics/bullet3/releases
Here are some videos of Bullet reinforcement learning environments trained using TensorFlow Agents PPO:


See also KUKA grasping and pybullet Ant.
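If you want to try these environments yourself, here is a minimal sketch of loading one of the pybullet Gym environments and stepping it with random actions. It assumes pybullet and gym are installed via pip; environment names such as AntBulletEnv-v0 and KukaBulletEnv-v0 follow the pybullet quickstart guide and may differ between versions.

# Minimal sketch: run a pybullet Gym environment with random actions.
# Assumes `pip install pybullet gym`; environment names may vary by version.
import gym
import pybullet_envs  # registers the Bullet environments with gym

env = gym.make("AntBulletEnv-v0")  # e.g. KukaBulletEnv-v0 for the KUKA grasping task
obs = env.reset()
for _ in range(1000):
    action = env.action_space.sample()          # replace with a trained PPO policy
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
env.close()

In practice the random policy above would be replaced by the TensorFlow Agents PPO policy mentioned in the post.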

Using Simulation and Domain Adaptation to Improve Efficiency of Deep Robotic Grasping

A paper using PyBullet:

Instrumenting and collecting annotated visual grasping datasets to train modern machine learning algorithms can be extremely time-consuming and expensive. An appealing alternative is to use off-the-shelf simulators to render synthetic data for which ground-truth annotations are generated automatically. Unfortunately, models trained purely on simulated data often fail to generalize to the real world. We study how randomized simulated environments and domain adaptation methods can be extended to train a grasping system to grasp novel objects from raw monocular RGB images. We extensively evaluate our approaches with a total of more than 25,000 physical test grasps, studying a range of simulation conditions and domain adaptation methods, including a novel extension of pixel-level domain adaptation that we term the GraspGAN. We show that, by using synthetic data and domain adaptation, we are able to reduce the number of real-world samples needed to achieve a given level of performance by up to 50 times, using only randomly generated simulated objects. We also show that by using only unlabeled real-world data and our GraspGAN methodology, we obtain real-world grasping performance without any real-world labels that is similar to that achieved with 939,777 labeled real-world samples.

See https://arxiv.org/abs/1709.07857 and https://sites.google.com/corp/view/graspgan
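As a loose illustration of the simulated-data side of such a pipeline (not the paper's actual setup), the following sketch renders randomized synthetic RGB images in pybullet by perturbing object pose and color. The object file comes from pybullet_data, and all parameter values are placeholder assumptions.

# Illustrative sketch of domain-randomized rendering in pybullet.
# Not the paper's pipeline; object files and parameters are placeholders.
import random
import pybullet as p
import pybullet_data

p.connect(p.DIRECT)                       # headless rendering
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.loadURDF("plane.urdf")
obj = p.loadURDF("cube_small.urdf", basePosition=[0, 0, 0.05])

for i in range(10):
    # Randomize object pose and color for each synthetic sample.
    pos = [random.uniform(-0.1, 0.1), random.uniform(-0.1, 0.1), 0.05]
    p.resetBasePositionAndOrientation(obj, pos, [0, 0, 0, 1])
    p.changeVisualShape(obj, -1, rgbaColor=[random.random(), random.random(), random.random(), 1])
    view = p.computeViewMatrix(cameraEyePosition=[0.4, 0, 0.4],
                               cameraTargetPosition=[0, 0, 0],
                               cameraUpVector=[0, 0, 1])
    proj = p.computeProjectionMatrixFOV(fov=60, aspect=1.0, nearVal=0.01, farVal=2.0)
    width, height, rgb, depth, seg = p.getCameraImage(128, 128, view, proj)
p.disconnect()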

Learning 6-DOF Grasping Interaction via Deep Geometry-aware 3D Representations

A paper using PyBullet:

This paper focuses on the problem of learning 6-DOF grasping with a parallel jaw gripper in simulation. We propose the notion of a geometry-aware representation in grasping based on the assumption that knowledge of 3D geometry is at the heart of interaction. Our key idea is constraining and regularizing grasping interaction learning through 3D geometry prediction. Specifically, we formulate the learning of a deep geometry-aware grasping model in two steps: First, we learn to build a mental geometry-aware representation by reconstructing the scene (i.e., a 3D occupancy grid) from RGBD input via generative 3D shape modeling. Second, we learn to predict the grasping outcome with its internal geometry-aware representation. The learned outcome prediction model is used to sequentially propose grasping solutions via analysis-by-synthesis optimization. Our contributions are fourfold: (1) To the best of our knowledge, we are presenting for the first time a method to learn a 6-DOF grasping net from RGBD input; (2) We build a grasping dataset from demonstrations in virtual reality with rich sensory and interaction annotations. This dataset includes 101 everyday objects spread across 7 categories; additionally, we propose a data augmentation strategy for effective learning; (3) We demonstrate that the learned geometry-aware representation leads to about 10 percent relative performance improvement over the baseline CNN on grasping objects from our dataset; (4) We further demonstrate that the model generalizes to novel viewpoints and object instances.

https://arxiv.org/abs/1708.07303
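The 3D occupancy grid mentioned in the first step is a standard voxel representation. As a generic illustration (not the paper's model), here is a short numpy sketch that voxelizes a point cloud into a binary occupancy grid; the grid bounds, resolution and the point cloud itself are arbitrary placeholders.

# Generic sketch: voxelize a point cloud into a binary 3D occupancy grid.
# Grid bounds, resolution and the point cloud are placeholder assumptions.
import numpy as np

def voxelize(points, origin, voxel_size, dims):
    """points: (N, 3) array in meters; returns a boolean grid of shape dims."""
    grid = np.zeros(dims, dtype=bool)
    idx = np.floor((points - origin) / voxel_size).astype(int)
    # Keep only points that fall inside the grid bounds.
    valid = np.all((idx >= 0) & (idx < np.array(dims)), axis=1)
    idx = idx[valid]
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid

# Example: 1000 random points in a 32x32x32 grid covering a 0.32 m cube.
pts = np.random.uniform(0.0, 0.32, size=(1000, 3))
occ = voxelize(pts, origin=np.array([0.0, 0.0, 0.0]), voxel_size=0.01, dims=(32, 32, 32))
print(occ.sum(), "occupied voxels")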

OpenAI Roboschool using Bullet Physics

Roboschool provides new OpenAI Gym environments for controlling robots in simulation. Eight of these environments serve as free alternatives to pre-existing MuJoCo implementations, re-tuned to produce more realistic motion. We also include several new, challenging environments.

After we launched Gym, one issue we heard from many users was that the MuJoCo component required a paid license (though MuJoCo recently added free student licenses for personal and class work). Roboschool removes this constraint, letting everyone conduct research regardless of their budget. Roboschool is based on the Bullet Physics Engine, an open-source, permissively licensed physics library that has been used by other simulation software such as Gazebo and V-REP.

See also https://openai.com/blog/roboschool/
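Like the pybullet environments above, the Roboschool environments register with OpenAI Gym. Here is a minimal sketch, assuming roboschool is installed; the environment name is taken from the Roboschool README and may change between releases.

# Minimal sketch: run a Roboschool environment with random actions.
# Assumes roboschool is installed; environment names may vary between releases.
import gym
import roboschool  # registers the Roboschool environments with gym

env = gym.make("RoboschoolAnt-v1")
obs = env.reset()
for _ in range(1000):
    obs, reward, done, info = env.step(env.action_space.sample())
    if done:
        obs = env.reset()
env.close()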

Bullet 2.86 with pybullet for robotics, deep learning, VR and haptics

Bullet 2.86 has improved Python bindings, pybullet, for robotics, machine learning and VR; see the pybullet quickstart guide.

Furthermore, the PGS LCP constraint solver has a new option to terminate as soon as the residual (error) is below a specified tolerance, instead of terminating after a fixed number of iterations. There is preliminary support for loading some MuJoCo MJCF XML files (see data/mjcf), and there are haptic experiments with a VR glove. Get the latest release from github here.
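Both features are reachable from pybullet; here is a minimal sketch, with the caveat that the solverResidualThreshold parameter name follows later versions of the pybullet quickstart guide and may not be exposed in every build.

# Sketch: MJCF loading and residual-based solver termination via pybullet.
# solverResidualThreshold may not be exposed in every pybullet build.
import pybullet as p
import pybullet_data

p.connect(p.DIRECT)
p.setAdditionalSearchPath(pybullet_data.getDataPath())

# Terminate the PGS LCP solver once the residual drops below 1e-7,
# instead of always running the full number of iterations.
p.setPhysicsEngineParameter(numSolverIterations=50, solverResidualThreshold=1e-7)

# Preliminary MuJoCo MJCF support: returns a tuple of body unique ids.
bodies = p.loadMJCF("mjcf/humanoid.xml")

p.setGravity(0, 0, -9.8)
for _ in range(240):
    p.stepSimulation()
p.disconnect()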

[youtube 0JC-yukK-jo]

Bullet 2.85 released : pybullet and Virtual Reality support for HTC Vive and Oculus Rift

We have been making a lot of progress on higher-quality physics simulation for robotics, games and visual effects. To make our physics simulation easier to use, especially for roboticists and machine learning experts, we created Python bindings; see examples/pybullet. In addition, we added Virtual Reality support for the HTC Vive and Oculus Rift using the OpenVR SDK. See the attached YouTube video. Updated documentation will be added soon, as well as possible show-stopper bug fixes, so the actual release tag may bump up to 2.85.x. Download the release from github here.

[youtube VMJyZtHQL50].
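For readers new to pybullet, here is a minimal "hello world" in the spirit of the examples/pybullet folder. The file names come from pybullet_data; to attach to the VR application instead, App_SharedMemoryPhysics_VR must already be running and the connection mode becomes p.SHARED_MEMORY.

# Minimal pybullet example: load a URDF and step the simulation.
# To attach to the running VR application instead, use p.connect(p.SHARED_MEMORY).
import pybullet as p
import pybullet_data

p.connect(p.GUI)                                  # or p.DIRECT for headless use
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.setGravity(0, 0, -9.8)
p.loadURDF("plane.urdf")
robot = p.loadURDF("r2d2.urdf", basePosition=[0, 0, 0.5])

for _ in range(240):                              # simulate one second at the default 240 Hz
    p.stepSimulation()

print(p.getBasePositionAndOrientation(robot))
p.disconnect()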

Bullet 2.83 released and upcoming SIGGRAPH 2015 course


The new Bullet Physics SDK 2.83 is available from github. The biggest change is the new example browser using OpenGL 3+. For more changes and features, see the docs/BulletQuickstart.pdf as part of the release. For more information and download link, see http://www.bulletphysics.org/Bullet/phpBB3/viewtopic.php?f=18&t=10527

Also, our proposal for a course on Bullet got accepted for the upcoming SIGGRAPH 2015 conference in Los Angeles.

Tuesday, 11 August 3:45 pm – 5:15 pm, Los Angeles Convention Center, Room 404AB

UPDATE: here are the slide decks:

3:45-4:15 pm
Introduction to rigid body pipeline, collision detection

4:15-4:45 pm
Advances in constraint solving, Featherstone Articulated Body Algorithm

4:45-5:15 pm
Acceleration of the full collision detection and constraint solver on GPU

Scientific and Technical Academy Award for the development of Bullet Physics!

The Academy of Motion Picture Arts and Sciences today announced that 21 scientific and technical achievements represented by 58 individual award recipients will be honored at its annual Scientific and Technical Awards Presentation on Saturday, February 7, at the Beverly Wilshire in Beverly Hills.

“To Erwin Coumans for the development of the Bullet physics library, and to Nafees Bin Zafar and Stephen Marshall for the separate development of two large-scale destruction simulation systems based on Bullet.

These pioneering systems demonstrated that large numbers of constrained rigid bodies could be used to animate visually complex, believable destruction effects with minimal simulation time.”

Thanks to all Bullet contributors and users!
See https://www.oscars.org/news/21-scientific-and-technical-achievements-be-honored-academy-awardsr

Bullet used in NASA Tensegrity Robotics Toolkit, book Multithreading for Visual Effects


NASA is using Bullet in their new open source Tensegrity Robotics Toolkit. You can find more information and a video link here: http://bulletphysics.org/Bullet/phpBB3/viewtopic.php?f=17&t=9978

The new book Multithreading for Visual Effects includes a chapter on the OpenCL optimizations for the upcoming Bullet 3.x. Other chapters cover multithreading development experiences from OpenSubDiv, Houdini, Pixar Presto, and DreamWorks Fluids and LibEE. You can get it from the publisher AK Peters/CRC Press or from Amazon.

Development on the upcoming Bullet 2.83 and Bullet 3.x is making good progress; hopefully an update will follow soon.
