
Wavelet Latent Diffusion (WaLa)

Billion-Parameter 3D Generative Model with Compact Wavelet Encodings

ABOVE: We propose a new 3D generative model, called WaLa, that can generate shapes from conditions such as sketches, text, single-view images, low-resolution voxels, point clouds, and depth maps.

Abstract

Large-scale 3D generative models require substantial computational resources yet often fall short in capturing fine details and complex geometries at high resolutions. We attribute this limitation to the inefficiency of current representations, which lack the compactness required to train generative models effectively. To address this, we introduce a novel approach called Wavelet Latent Diffusion, or WaLa, that encodes 3D shapes into compact, wavelet-based latent encodings. Specifically, we compress a 256³ signed distance field into a 12³ × 4 latent grid, achieving an impressive 2,427× compression ratio with minimal loss of detail. This high level of compression allows our method to efficiently train large-scale generative networks without increasing inference time. Our models, both conditional and unconditional, contain approximately one billion parameters and successfully generate high-quality 3D shapes at 256³ resolution. Moreover, WaLa offers rapid inference, producing shapes within two to four seconds depending on the condition, despite the model's scale. We demonstrate state-of-the-art performance across multiple datasets, with significant improvements in generation quality, diversity, and computational efficiency. We open-source our code and, to the best of our knowledge, release the largest pretrained 3D generative models across different modalities.
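The reported compression ratio follows directly from the two grid sizes. As a quick sanity check (plain Python arithmetic, independent of any WaLa code):

```python
# Back-of-the-envelope check of the compression ratio quoted in the abstract.
sdf_values = 256 ** 3        # 256^3 signed distance field: 16,777,216 values
latent_values = 12 ** 3 * 4  # 12^3 x 4 latent grid: 6,912 values
print(f"{sdf_values / latent_values:.1f}x")  # -> 2427.3x, i.e. the ~2,427x ratio
```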


AT RIGHT: Overview of the WaLa network architecture, its two-stage training process, and the inference method. Top left: Stage 1 autoencoder training, which compresses the Wavelet Tree (W) shape representation into a compact latent space. Right: conditional/unconditional diffusion training in that latent space. Bottom: the inference pipeline, sampling from the trained diffusion model and decoding the sampled latent into a Wavelet Tree (W), then into a mesh.
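The caption's data flow can be summarized in a short sketch. The code below is a hypothetical, heavily simplified stand-in: tiny linear layers replace the real convolutional autoencoder and the billion-parameter diffusion network, and the noising schedule and sampler are toy versions. All names and dimensions are illustrative, not taken from the WaLa release.

```python
import torch
import torch.nn as nn

# Stand-in sizes: the paper compresses a wavelet tree of a 256^3 SDF into a
# 12^3 x 4 latent grid; here both are flattened toy vectors for brevity.
W_DIM, Z_DIM = 4096, 64

enc = nn.Linear(W_DIM, Z_DIM)      # Stage 1: encoder, W -> compact latent z
dec = nn.Linear(Z_DIM, W_DIM)      # Stage 1: decoder, z -> reconstructed W
eps = nn.Linear(Z_DIM + 1, Z_DIM)  # Stage 2: denoiser, predicts noise from (z_t, t)

# --- Stage 1: autoencoder training (compress W into the latent space) ---
W = torch.randn(8, W_DIM)                    # a batch of wavelet-tree shapes
recon_loss = nn.functional.mse_loss(dec(enc(W)), W)
# (a real training loop would call recon_loss.backward() and an optimizer step)

# --- Stage 2: diffusion training in the latent space ---
z0 = enc(W).detach()                         # latents from the frozen encoder
t = torch.rand(8, 1)                         # diffusion timestep in [0, 1]
noise = torch.randn_like(z0)
zt = (1 - t) * z0 + t * noise                # toy linear noising schedule
diff_loss = nn.functional.mse_loss(eps(torch.cat([zt, t], dim=1)), noise)

# --- Inference: sample a latent, decode to W, then mesh it ---
zt = torch.randn(1, Z_DIM)                   # start from pure noise
for step in reversed(range(10)):             # crude 10-step denoising loop
    t = torch.full((1, 1), (step + 1) / 10)
    pred_noise = eps(torch.cat([zt, t], dim=1))
    zt = zt - pred_noise / 10                # toy update; real samplers differ
W_sample = dec(zt)                           # decode latent back to a Wavelet Tree
# ...then invert the wavelet transform to an SDF and extract a mesh
# (e.g., marching cubes) -- omitted here.
```

Conditioning (sketches, text, images, voxels, point clouds, depth maps) would enter as an extra input to the denoiser; the unconditional case above keeps the sketch minimal.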

