Publication | IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2022
CLIP-Forge
Towards Zero-Shot Text-to-Shape Generation
We propose a zero-shot text-to-shape generation method named CLIP-Forge. Without training on any paired text-shape labels, our method generates meaningful shapes that correctly reflect common names, (sub-)categories, and semantic attributes.
Our approach is among the first techniques to convert text to 3D shapes without costly inference-time optimization. Furthermore, it can generate multiple shapes for a given text.
This paper was presented at the IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2022.
The dataset for this paper is available from the Autodesk AI Lab on GitHub.
Download publication
Abstract
CLIP-Forge: Towards Zero-Shot Text-to-Shape Generation
Aditya Sanghi, Hang Chu, Joseph G. Lambourne, Ye Wang, Chin-Yi Cheng, Marco Fumero, Kamal Rahimi Malekshan
IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2022
Generating shapes using natural language can enable new ways of imagining and creating the things around us. While significant recent progress has been made in text-to-image generation, text-to-shape generation remains a challenging problem due to the unavailability of paired text and shape data at a large scale. We present a simple yet effective method for zero-shot text-to-shape generation that circumvents such data scarcity. Our proposed method, named CLIP-Forge, is based on a two-stage training process, which only depends on an unlabelled shape dataset and a pre-trained image-text network such as CLIP. Our method has the benefits of avoiding expensive inference time optimization, as well as the ability to generate multiple shapes for a given text. We not only demonstrate promising zero-shot generalization of the CLIP-Forge model qualitatively and quantitatively, but also provide extensive comparative evaluations to better understand its behavior.
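The two-stage pipeline in the abstract can be sketched concretely: stage one trains a shape autoencoder on unlabelled shapes, and stage two trains a conditional normalizing flow that maps CLIP embeddings to the autoencoder's latent space. At inference, a prompt is embedded with CLIP's frozen text encoder, the flow samples latent codes conditioned on that embedding, and the decoder turns each code into a shape. Below is a minimal, hypothetical sketch of the inference path; the `flow` and `decoder` objects, the `flow.sample` signature, and the `text_to_shapes` helper are illustrative assumptions rather than the authors' released code (only the CLIP calls are the real OpenAI CLIP API).

```python
# Minimal sketch of CLIP-Forge-style inference (assumptions noted in comments).
import torch
import clip  # OpenAI's CLIP package: https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)  # frozen image-text network

@torch.no_grad()
def text_to_shapes(prompt, flow, decoder, num_samples=4):
    """Generate several candidate shapes from one text prompt.

    `flow` (a conditional normalizing flow over shape latents) and `decoder`
    (the stage-one shape autoencoder's decoder) are hypothetical stand-ins
    for the paper's trained models; the assumed flow.sample signature returns
    `num_samples` latent codes conditioned on one embedding.
    """
    # Embed the prompt with CLIP's frozen text encoder and unit-normalize,
    # matching how CLIP embeddings are typically compared.
    tokens = clip.tokenize([prompt]).to(device)
    cond = clip_model.encode_text(tokens).float()
    cond = cond / cond.norm(dim=-1, keepdim=True)

    # Draw several latent shape codes conditioned on the text embedding;
    # different Gaussian base samples yield different plausible shapes.
    z = flow.sample(num_samples, context=cond)

    # Decode each latent into a shape representation (e.g., an occupancy grid).
    return decoder(z)
```

Because only the flow consumes CLIP embeddings, no text is needed during training: the flow can be conditioned on CLIP image embeddings of rendered shapes, and CLIP's shared image-text embedding space lets text embeddings substitute at inference, which is what makes the method zero-shot.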
Related Resources
2023
Hypothesis Search: Inductive Reasoning with Language Models
We propose to improve the inductive reasoning ability of LLMs by…
2024
Experiential Views: Towards Human Experience Evaluation of Designed Spaces using Vision-Language Models
Exploratory research on helping designers and architects anticipate…
2022
Neural Implicit Style-Net: synthesizing shapes in a preferred style exploiting self supervision
We introduce a novel approach to disentangle style from content in the…
2019
Relational Graph Representation Learning for Open-Domain Question Answering
We introduce a relational graph neural network with bi-directional…