IEEE International Conference on Robotics and Automation 2023
Safe Self-Supervised Learning in Real of Visuo-Tactile Feedback Policies for Industrial Insertion
Fig. 1: Overview of the learned two-phase insertion policy; red arrows indicate the robot actions given by the policies. (A) The robot grasps the part at an initial pose. (B) The tactile-guided policy π_tac estimates the grasp pose from the tactile image and aligns the z-axis of the part with the insertion axis. (C) A vision-guided policy π_vis inserts the part. (D) The part is inserted successfully into the receptacle.
Abstract
Industrial insertion tasks are often performed repetitively with parts that are subject to tight tolerances and prone to breakage. Learning an industrial insertion policy in real is challenging because collisions between the part and the environment can cause slippage or breakage of the part. In this paper, we present a safe self-supervised method to learn a visuo-tactile insertion policy that is robust to grasp pose variations. The method reduces human input and collisions between the part and the receptacle. It divides the insertion task into two phases. In the first, align, phase, a tactile-based grasp pose estimation model is learned to align the insertion part with the receptacle. In the second, insert, phase, a vision-based policy is learned to guide the part into the receptacle. The robot uses force-torque sensing to achieve a safe self-supervised data collection pipeline. Physical experiments on the USB insertion task from the NIST Assembly Taskboard suggest that the resulting policies achieve 45/45 insertion successes on 45 different initial grasp poses, improving on two baselines: (1) a behavior cloning agent trained on 50 human insertion demonstrations (1/45) and (2) an online RL policy (TD3) trained in real (0/45).
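The two-phase structure and the force-torque safety check described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the policy stand-ins (pi_tac, pi_vis), the SimRobot class, and the force threshold are all hypothetical.

```python
FORCE_LIMIT_N = 10.0  # assumed safety threshold on sensed contact force


def pi_tac(tactile_image):
    """Phase 1 (align): estimate the grasp-pose error from a tactile image
    and return a corrective rotation. Stand-in that reads the error directly
    from the simulated observation in place of a learned estimator."""
    return -tactile_image["tilt_error"]


def pi_vis(wrist_image):
    """Phase 2 (insert): vision-guided policy; here a fixed downward step."""
    return 0.001  # metres along the insertion axis


class SimRobot:
    """Toy stand-in for the physical arm and its sensors (illustration only)."""

    def __init__(self, tilt_error=0.05, target_depth=0.01):
        self.tilt = tilt_error       # angular misalignment of the grasped part
        self.depth = 0.0             # current insertion depth
        self.target_depth = target_depth

    def read_tactile(self):
        return {"tilt_error": self.tilt}

    def read_wrist_camera(self):
        return None  # placeholder observation

    def rotate_gripper(self, dtheta):
        self.tilt += dtheta

    def sensed_force(self):
        # Contact force grows if the part moves down while still misaligned.
        return 100.0 * abs(self.tilt) if self.depth > 0 else 0.0

    def step_down(self, dz):
        self.depth += dz


def safe_insert(robot, max_steps=200):
    # Phase 1: align the part's z-axis with the insertion axis using touch.
    robot.rotate_gripper(pi_tac(robot.read_tactile()))

    # Phase 2: vision-guided insertion with a force-torque safety check;
    # exceeding the limit aborts the episode instead of risking breakage,
    # which is what enables safe self-supervised data collection.
    for _ in range(max_steps):
        if robot.sensed_force() > FORCE_LIMIT_N:
            return False  # collision detected; episode labelled a failure
        robot.step_down(pi_vis(robot.read_wrist_camera()))
        if robot.depth >= robot.target_depth:
            return True   # part fully seated in the receptacle
    return False
```

In this toy rollout, aligning first keeps the sensed force at zero, so the insertion phase runs to the target depth; skipping alignment would trip the force limit and abort safely.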
Associated Researchers
Letian Fu
UC Berkeley
Huang Huang
UC Berkeley
Lars Berscheid
Karlsruhe Institute of Technology
Ken Goldberg
UC Berkeley
Related Publications
2025
In-Context Imitation Learning via Next-Token Prediction
This robotics approach allows flexible and training-free execution of…
2024
ASAP: Automated Sequence Planning for Complex Robotic Assembly with Physical Feasibility
A physics-based planning approach for automatically generating…
2024
Bridging the Sim-to-Real Gap with Dynamic Compliance Tuning for Industrial Insertion
A novel framework for robustly learning manipulation skills…
2023
Constrained-Context Conditional Diffusion Models for Imitation Learning
A diffusion model policy for solving 6-DoF robotic manipulation tasks…