Publication | ICRA Workshop on RL for Contact-Rich Manipulation 2022
Learning Dense Reward with Temporal Variant Self-Supervision
Reinforcement learning (RL) is gaining momentum in solving complex real-world robotics problems, and contact-rich manipulation is one of the most challenging categories. The success of RL in these scenarios depends on a reliable reward signal. Although this class of problems is characterized by rich, high-dimensional, continuous observations, it is typically difficult to design a dense reward that harnesses such richness to guide RL training. The conventional alternative of sparse, Boolean rewards (e.g., 1 if the task is completed successfully and 0 otherwise) often makes training slow and inefficient. This difficulty has led to the practice of reward engineering, where rewards are hand-crafted from domain knowledge and trial and error. However, such solutions often require extensive robotics expertise and tend to be task-specific.
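To make the contrast concrete, the snippet below sketches the two kinds of reward signal described above; the function names and the `reward_model` callable are illustrative placeholders, not part of the paper's implementation.

```python
import numpy as np

def sparse_reward(task_completed: bool) -> float:
    # Boolean reward: informative only at the very end of an episode.
    return 1.0 if task_completed else 0.0

def dense_reward(observation: np.ndarray, reward_model) -> float:
    # Dense reward: a (learned) model scores every multimodal observation,
    # giving the agent feedback at each step instead of only at completion.
    return float(reward_model(observation))
```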
In this research, we propose an end-to-end learning framework that extracts dense rewards from multimodal observations. The reward is learned in a self-supervised manner by combining one or two human demonstrations with a physics simulator, and it can then be used directly to train RL algorithms. We evaluate our framework on two contact-rich manipulation tasks: joint assembly and door opening.
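As a rough illustration of how a learned dense reward could plug into RL training, the sketch below relabels each simulator step with the learned reward instead of the sparse success signal; the `env`, `policy`, and `reward_model` interfaces are assumptions for illustration, not the paper's actual API.

```python
def rollout_with_learned_reward(env, policy, reward_model, horizon=200):
    """Collect one episode, relabeling each step with the learned dense reward
    instead of the simulator's sparse success signal. All interfaces here
    (env, policy, reward_model) are illustrative placeholders."""
    obs = env.reset()
    trajectory = []
    for _ in range(horizon):
        action = policy(obs)
        next_obs, _, done, _ = env.step(action)   # discard the sparse reward
        reward = float(reward_model(next_obs))    # learned dense reward
        trajectory.append((obs, action, reward, next_obs, done))
        obs = next_obs
        if done:
            break
    return trajectory
```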
This paper makes two main contributions: 1) a temporal variant forward sampling (TVFS) method that is more robust and cost-efficient in generating samples from human demonstrations for contact-rich manipulation tasks, and 2) a self-supervised latent representation learning architecture that can exploit the sample pairs produced by TVFS. A hedged sketch of the sampling idea follows.
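The page does not spell out TVFS in detail, but one plausible reading of sampling with temporal variance is sketched below: observation pairs are drawn from a demonstration with a randomly varying temporal gap. The offset schedule and pair format are assumptions for illustration only, not the paper's exact procedure.

```python
import numpy as np

def sample_temporal_pairs(demonstration, num_pairs=64, max_offset=10, seed=None):
    """Draw (earlier, later, offset) tuples from a single demonstration with a
    randomly varying temporal gap. This is only a plausible reading of
    temporal-variant sampling, not the paper's exact TVFS procedure."""
    rng = np.random.default_rng(seed)
    horizon = len(demonstration)
    pairs = []
    for _ in range(num_pairs):
        offset = int(rng.integers(1, max_offset + 1))    # varying temporal gap
        start = int(rng.integers(0, horizon - offset))   # anchor frame index
        pairs.append((demonstration[start], demonstration[start + offset], offset))
    return pairs
```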
Abstract
Learning Dense Reward with Temporal Variant Self-Supervision
Yuning Wu, Jieliang Luo, Hui Li
ICRA Workshop on RL for Contact-Rich Manipulation 2022
Rewards play an essential role in reinforcement learning. In contrast to rule-based game environments with well-defined reward functions, complex real-world robotic applications, such as contact-rich manipulation, lack explicit and informative descriptions that can directly be used as a reward. Previous work has shown that it is possible to algorithmically extract dense rewards directly from multimodal observations. In this paper, we aim to extend this effort by proposing a more efficient and robust way of sampling and learning. In particular, our sampling approach utilizes temporal variance to simulate the fluctuating state and action distribution of a manipulation task. We then propose a network architecture for self-supervised learning to better incorporate temporal information in latent representations. We tested our approach in two experimental setups, joint assembly and door opening. Preliminary results show that our approach is effective and efficient in learning dense rewards, and the learned rewards lead to faster convergence than baselines.
Associated Researchers
Jieliang (Rodger) Luo
Sr. Principal AI Research Scientist
Yuning Wu
Carnegie Mellon University