Publication | IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2021

UV-Net

Learning from Boundary Representations

This paper presents UV-Net, a representation and neural network designed to learn from boundary representations (B-reps), the industry-wide standard for solid models in computer-aided design (CAD). This research has the potential to unlock numerous data-driven CAD applications, such as auto-completion of modeling operations, smart selection tools, and shape similarity search.


Abstract

UV-Net: Learning from Boundary Representations

Pradeep Kumar Jayaraman, Aditya Sanghi, Joseph G. Lambourne, Karl D.D. Willis, Thomas Davies, Hooman Shayani, Nigel Morris

IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2021

We introduce UV-Net, a novel neural network architecture and representation designed to operate directly on Boundary representation (B-rep) data from 3D CAD models. The B-rep format is widely used in the design, simulation, and manufacturing industries to enable sophisticated and precise CAD modeling operations. However, B-rep data presents some unique challenges when used with modern machine learning due to the complexity of the data structure and its support for both continuous non-Euclidean geometric entities and discrete topological entities. In this paper, we propose a unified representation for B-rep data that exploits the U and V parameter domain of curves and surfaces to model geometry, and an adjacency graph to explicitly model topology. This leads to a unique and efficient network architecture, UV-Net, that couples image and graph convolutional neural networks in a compute- and memory-efficient manner. To aid future research, we present a synthetic labelled B-rep dataset, SolidLetters, derived from human-designed fonts with variations in both geometry and topology. Finally, we demonstrate that UV-Net can generalize to supervised and unsupervised tasks on five datasets, while outperforming alternative 3D shape representations such as point clouds, voxels, and meshes.
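The two-part representation described in the abstract can be sketched in a few lines of NumPy: each face of a solid is sampled on a regular grid in its (u, v) parameter domain to produce image-like geometric features, and the faces are linked by an adjacency graph over which features are aggregated. The toy solid (a unit cube), the grid size, the feature channels, and the mean-pooling stand-in for the paper's image CNN are all illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def sample_uv_grid(surface_fn, n=10):
    """Sample a parametric surface on a regular n x n grid in (u, v)."""
    u = np.linspace(0.0, 1.0, n)
    v = np.linspace(0.0, 1.0, n)
    uu, vv = np.meshgrid(u, v, indexing="ij")
    return surface_fn(uu, vv)  # (n, n, 3) array of 3D points

def plane(origin, du, dv):
    """A planar parametric patch: p(u, v) = origin + u*du + v*dv."""
    o, a, b = map(np.asarray, (origin, du, dv))
    return lambda uu, vv: o + uu[..., None] * a + vv[..., None] * b

# Toy solid: a unit cube with six planar faces.
faces = [
    plane((0, 0, 0), (1, 0, 0), (0, 1, 0)),  # bottom
    plane((0, 0, 1), (1, 0, 0), (0, 1, 0)),  # top
    plane((0, 0, 0), (1, 0, 0), (0, 0, 1)),  # front
    plane((0, 1, 0), (1, 0, 0), (0, 0, 1)),  # back
    plane((0, 0, 0), (0, 1, 0), (0, 0, 1)),  # left
    plane((1, 0, 0), (0, 1, 0), (0, 0, 1)),  # right
]

# Stand-in for the per-face image CNN: average-pool each UV grid to a vector.
face_feats = np.stack([sample_uv_grid(f).mean(axis=(0, 1)) for f in faces])

# Face-adjacency graph of a cube: each face touches the four non-opposite faces.
A = np.ones((6, 6)) - np.eye(6)
A[0, 1] = A[1, 0] = 0  # bottom/top share no edge
A[2, 3] = A[3, 2] = 0  # front/back
A[4, 5] = A[5, 4] = 0  # left/right

# One mean-aggregation graph-convolution step over the adjacency graph.
deg = A.sum(axis=1, keepdims=True)
face_feats = (A @ face_feats) / deg

print(face_feats.shape)  # (6, 3): one aggregated feature vector per face
```

In UV-Net itself, the pooled grid features are produced by image convolutions over multi-channel UV samples (points, normals, trimming masks), and the aggregation step is a learned graph convolution; the sketch above only mirrors the data flow.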
