Publication | IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2022
CLIP-Forge
Towards Zero-Shot Text-to-Shape Generation
Our approach is among the first that can convert text to 3D shapes without costly inference-time optimization. It can also generate multiple shapes for a given text.
The dataset for this paper is available from Autodesk AI Lab on GitHub.
Abstract
CLIP-Forge: Towards Zero-Shot Text-to-Shape Generation
Aditya Sanghi, Hang Chu, Joseph G. Lambourne, Ye Wang, Chin-Yi Cheng, Marco Fumero, Kamal Rahimi Malekshan
IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2022
Generating shapes using natural language can enable new ways of imagining and creating the things around us. While significant recent progress has been made in text-to-image generation, text-to-shape generation remains a challenging problem due to the unavailability of paired text and shape data at a large scale. We present a simple yet effective method for zero-shot text-to-shape generation that circumvents such data scarcity. Our proposed method, named CLIP-Forge, is based on a two-stage training process, which only depends on an unlabelled shape dataset and a pre-trained image-text network such as CLIP. Our method has the benefits of avoiding expensive inference time optimization, as well as the ability to generate multiple shapes for a given text. We not only demonstrate promising zero-shot generalization of the CLIP-Forge model qualitatively and quantitatively, but also provide extensive comparative evaluations to better understand its behavior.
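The abstract describes CLIP-Forge's pipeline at a high level: a text prompt is embedded with a pre-trained CLIP text encoder, a conditional generative model samples a shape latent given that embedding, and a decoder turns the latent into a 3D shape, so fresh noise yields a different shape for the same text. The sketch below illustrates that flow of data only; the encoder, sampler, and decoder here are toy placeholders, and all dimensions and function names are assumptions, not the authors' implementation.

```python
import hashlib
import numpy as np

EMBED_DIM = 512    # CLIP ViT-B/32 text embedding size
LATENT_DIM = 128   # assumed shape-latent size
GRID = 32          # assumed voxel resolution

def clip_text_embed(text: str) -> np.ndarray:
    """Placeholder for CLIP's text encoder: a stable unit vector per prompt."""
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "big")
    v = np.random.default_rng(seed).standard_normal(EMBED_DIM)
    return v / np.linalg.norm(v)

def sample_shape_latent(cond: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Placeholder for the conditional sampler: draws a shape latent given
    the text embedding; fresh noise each call gives a different shape."""
    noise = rng.standard_normal(LATENT_DIM)
    return noise + cond[:LATENT_DIM]  # toy conditioning on the embedding

def decode_voxels(z: np.ndarray) -> np.ndarray:
    """Placeholder decoder: maps a latent to a voxel occupancy grid in [0, 1]."""
    w = np.random.default_rng(42).standard_normal((LATENT_DIM, GRID**3))
    logits = z @ (w / np.sqrt(LATENT_DIM))
    return (1.0 / (1.0 + np.exp(-logits))).reshape(GRID, GRID, GRID)

def text_to_shapes(text: str, n: int = 2, seed: int = 0) -> list:
    """Generate n candidate shapes for one prompt, with no per-prompt
    optimization at inference time: embed once, then sample n latents."""
    cond = clip_text_embed(text)
    rng = np.random.default_rng(seed)
    return [decode_voxels(sample_shape_latent(cond, rng)) for _ in range(n)]
```

Because generation is a single embed-sample-decode pass rather than an optimization loop, producing several shapes for one prompt costs only additional sampler draws.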
Related Resources
2024
Elicitron: An LLM Agent-Based Simulation Framework for Design Requirements Elicitation
A novel framework that leverages Large Language Models (LLMs) to…
2021
Neural UpFlow: A Scene Flow Learning Approach to Increase the Apparent Resolution of Particle-Based Liquids
In this research, we introduce a data-driven approach to increase the…
2023
SolidGen: An Autoregressive Model for Direct B-rep Synthesis
A generative model that can synthesize 3D CAD models in the boundary…
2022
CAPRI-Net: Learning Compact CAD Shapes with Adaptive Primitive Assembly
We introduce CAPRI-Net, a self-supervised neural network for learning…