Publication | IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2022

UNIST

Unpaired Neural Implicit Shape Translation Network

This work enables a deep learning model to learn the notions of style and content from unpaired datasets of 2D and 3D shapes drawn from two domains, and to translate shapes (e.g., via style transfer) from one domain to the other.

Abstract

UNIST: Unpaired Neural Implicit Shape Translation Network

Qimin Chen, Johannes Merz, Aditya Sanghi, Hooman Shayani, Ali Mahdavi-Amiri, Hao Zhang

IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2022

We introduce UNIST, the first deep neural implicit model for general-purpose, unpaired shape-to-shape translation, in both 2D and 3D domains. Our model is built on auto-encoding implicit fields, rather than point clouds, which represent the state of the art. Furthermore, our translation network is trained to perform the task over a latent grid representation which combines the merits of both latent-space processing and position awareness, not only enabling drastic shape transforms but also preserving spatial features and fine local details for natural shape translations. With the same network architecture, and dictated only by the input domain pairs, our model can learn both style-preserving content alteration and content-preserving style transfer. We demonstrate the generality and quality of the translation results, and compare them to well-known baselines. Code is available at https://qiminchen.github.io/unist/.
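To make the abstract's pipeline concrete, below is a minimal PyTorch sketch of the three pieces it names: an autoencoder that maps a shape to a grid of latent codes, an implicit decoder that predicts occupancy at query points from features interpolated out of that grid (the "position awareness"), and a translator that maps latent grids between domains (the "latent-space processing"). All module names, layer choices, and sizes here are illustrative assumptions, not the paper's actual architecture; consult the linked code for the real implementation.

```python
# Illustrative sketch only: module names, channel sizes, and layer choices
# are assumptions, not UNIST's actual architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GridEncoder(nn.Module):
    """Encode a voxelized shape into a coarse grid of latent codes."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.conv = nn.Sequential(                       # 64^3 -> 8^3
            nn.Conv3d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(64, latent_dim, 4, stride=2, padding=1),
        )

    def forward(self, voxels):                           # (B, 1, 64, 64, 64)
        return self.conv(voxels)                         # (B, C, 8, 8, 8)

class ImplicitDecoder(nn.Module):
    """Predict occupancy at 3D query points, conditioned on grid features
    interpolated at each point's location (position awareness)."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim + 3, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, grid, points):                     # points: (B, N, 3) in [-1, 1]
        # Trilinearly sample the latent grid at each query point.
        feats = F.grid_sample(
            grid, points.view(points.size(0), -1, 1, 1, 3),
            align_corners=True,
        ).squeeze(-1).squeeze(-1).transpose(1, 2)        # (B, N, C)
        return self.mlp(torch.cat([feats, points], dim=-1))  # occupancy logits

class LatentGridTranslator(nn.Module):
    """Map a latent grid from one domain to the other (one direction)."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(latent_dim, latent_dim, 3, padding=1), nn.ReLU(),
            nn.Conv3d(latent_dim, latent_dim, 3, padding=1),
        )

    def forward(self, grid):
        return self.net(grid)
```

Under these assumptions, the encoder/decoder pair would first be trained on shapes from both domains with an occupancy reconstruction loss; the translator is then trained purely in latent-grid space on unpaired data, and a translated shape is recovered by decoding the translated grid at dense query points.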

Associated Researchers

Qimin Chen

Simon Fraser University

Johannes Merz

Simon Fraser University

Ali Mahdavi-Amiri

Simon Fraser University

Hao Zhang

Simon Fraser University
