Publication 2023

Constrained-Context Conditional Diffusion Models for Imitation Learning

Fig. 1: Illustration showing action prediction for table-top manipulation using our conditional diffusion model (C3DM), which learns to fixate on relevant parts of the input and iteratively refine its prediction using more details about the observation.

Abstract

Offline Imitation Learning (IL) is a powerful paradigm for learning visuomotor skills, especially for high-precision manipulation tasks. However, IL methods are prone to spurious correlations (expressive models may focus on distractors that are irrelevant to action prediction) and are thus fragile in real-world deployment. Prior methods have addressed this challenge by exploring different model architectures and action representations. However, none were able to balance sample efficiency, robustness against distractors, and the ability to solve high-precision manipulation tasks with a complex action space. To this end, we present the Constrained-Context Conditional Diffusion Model (C3DM), a diffusion model policy for solving 6-DoF robotic manipulation tasks with high precision and the ability to ignore distractions. A key component of C3DM is a fixation step that helps the action denoiser focus on task-relevant regions around the predicted action while ignoring distractors in the context. We empirically show that C3DM consistently achieves high success rates on a wide array of tasks, ranging from table-top manipulation to industrial kitting, that require varying levels of precision and robustness to distractors.
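To make the fixate-and-refine idea in the abstract concrete, below is a minimal sketch of how such an inference loop could look in Python/PyTorch. All names (`ActionDenoiser`, `fixate`, `infer_action`), feature sizes, and the step schedule are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of a C3DM-style "fixate and refine" inference loop.
# Module names, crop logic, dimensions, and the schedule are assumptions.
import torch
import torch.nn as nn

class ActionDenoiser(nn.Module):
    """Toy stand-in for the conditional action denoiser (assumed architecture)."""
    def __init__(self, obs_dim=64, act_dim=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim + 1, 128), nn.ReLU(),
            nn.Linear(128, act_dim),
        )

    def forward(self, obs_feat, noisy_action, t):
        # Predict a refined (denoised) action from the current context and step index.
        t_feat = torch.full_like(noisy_action[..., :1], float(t))
        return self.net(torch.cat([obs_feat, noisy_action, t_feat], dim=-1))

def fixate(obs_image, action_xy, zoom):
    """Constrain the context around the predicted action (fixation point).

    Placeholder: returns a flat feature vector; a real implementation would
    crop or re-render the observation at higher resolution around `action_xy`.
    """
    return obs_image.flatten(1)[:, :64] * zoom  # assumed 64-dim context feature

@torch.no_grad()
def infer_action(denoiser, obs_image, num_steps=4):
    """Iteratively refine a 6-DoF action while narrowing the visual context."""
    action = torch.randn(obs_image.shape[0], 6)           # start from noise
    for t in reversed(range(num_steps)):
        zoom = 1.0 + (num_steps - 1 - t)                  # tighter context each step
        obs_feat = fixate(obs_image, action[..., :2], zoom)
        action = denoiser(obs_feat, action, t)            # refine the action
    return action

if __name__ == "__main__":
    denoiser = ActionDenoiser()
    obs = torch.rand(1, 3, 64, 64)                        # dummy RGB observation
    print(infer_action(denoiser, obs))
```

The key design choice this sketch tries to capture is that each refinement step conditions on a progressively narrower view of the observation centered on the current action estimate, so distractors outside that region are excluded from the context.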


Associated Researchers

Vaibhav Saxena

Georgia Institute of Technology

Danfei Xu

Georgia Institute of Technology

