Publication
Wavelet Latent Diffusion (WaLa)
Billion-Parameter 3D Generative Model with Compact Wavelet Encodings
Abstract
Large-scale 3D generative models require substantial computational resources yet often fall short in capturing fine details and complex geometries at high resolutions. We attribute this limitation to the inefficiency of current representations, which lack the compactness needed for effective generative modeling. To address this, we introduce a novel approach called Wavelet Latent Diffusion, or WaLa, that encodes 3D shapes into compact, wavelet-based latent encodings. Specifically, we compress a 256³ signed distance field into a 12³ × 4 latent grid, achieving an impressive 2,427× compression ratio with minimal loss of detail. This high level of compression allows our method to efficiently train large-scale generative networks without increasing inference time. Our models, both conditional and unconditional, contain approximately one billion parameters and successfully generate high-quality 3D shapes at 256³ resolution. Moreover, WaLa offers rapid inference, producing shapes within two to four seconds depending on the condition, despite the model’s scale. We demonstrate state-of-the-art performance across multiple datasets, with significant improvements in generation quality, diversity, and computational efficiency. We open-source our code and, to the best of our knowledge, release the largest pretrained 3D generative models across different modalities.
Download publication
Overview of the WaLa network architecture, two-stage training process, and inference method. Top left: Stage 1 autoencoder training, compressing the Wavelet Tree (W) shape representation into a compact latent space. Right: conditional/unconditional diffusion training. Bottom: inference pipeline, illustrating sampling from the trained diffusion model and decoding the sampled latent into a Wavelet Tree (W), then into a mesh.
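The quoted 2,427× compression ratio follows directly from the grid sizes stated in the abstract; a quick arithmetic check (variable names are illustrative, not from the paper's code):

```python
# Sketch: compression ratio implied by the abstract's figures.
# A 256^3 signed distance field is encoded into a 12^3 x 4 latent grid.
sdf_cells = 256 ** 3          # 16,777,216 values in the input SDF
latent_cells = 12 ** 3 * 4    # 6,912 values in the compact latent grid
ratio = sdf_cells / latent_cells
print(round(ratio))  # → 2427
```

The small latent grid is what makes billion-parameter diffusion training tractable: the generative network operates on ~7K latent values rather than ~17M SDF values per shape.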
Authors
Related Publications
2024
Make-A-Shape: a Ten-Million-scale 3D Shape Model
Trained on 10 million 3D shapes, our model exhibits the capability to…
2023
Learned Visual Features to Textual Explanations
A novel method that leverages the capabilities of large language…
2023
Sketch-A-Shape: Zero-Shot Sketch-to-3D Shape Generation
Generative model that can synthesize consistent 3D shapes from a…