Publication | International Conference on Machine Learning 2020
Contrastive Multi-View Representation Learning on Graphs
Autodesk deals with complex and mostly non-Euclidean data structures, such as 3D models in Fusion, Revit, etc. This data usually does not carry any human annotations. Graphs are powerful data structures that can represent non-Euclidean data and can also carry semantic, visual, and geometric information in addition to the structure itself. Hence, one can treat graphs as a universal representation for Autodesk-related data. With this assumption, this project aims to learn representations (i.e., dense low-dimensional vectors) of such graphs. Once such representations are learned, one can, for example, compare the similarity of Revit or Fusion models, or use only a few labels to categorize them. This research can directly impact the Future of AEC and Industry Initiatives, and can also support research in other areas such as robotics and HCI.
Abstract
Kaveh Hassani, Amir Hosein Khasahmadi
International Conference on Machine Learning 2020
We introduce a self-supervised approach for learning node and graph level representations by contrasting structural views of graphs. We show that unlike visual representation learning, increasing the number of views to more than two or contrasting multi-scale encodings do not improve performance, and the best performance is achieved by contrasting encodings from first-order neighbors and a graph diffusion. We achieve new state-of-the-art results in self-supervised learning on 8 out of 8 node and graph classification benchmarks under the linear evaluation protocol. For example, on Cora (node) and Reddit-Binary (graph) classification benchmarks, we achieve 86.8% and 84.5% accuracy, which are 5.5% and 2.4% relative improvements over previous state-of-the-art. When compared to supervised baselines, our approach outperforms them in 4 out of 8 benchmarks.
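The central ingredient of the approach is contrasting two structural views of the same graph: a first-order (adjacency) view and a graph-diffusion view. The sketch below illustrates how such a diffusion view can be built from a toy adjacency matrix using the Personalized PageRank (PPR) closed form, S = alpha * (I - (1 - alpha) * D^-1/2 A D^-1/2)^-1, in NumPy. This is a minimal illustration under assumptions (the teleport parameter alpha = 0.2 and the toy graph are made up here), not the authors' released implementation.

```python
import numpy as np

def ppr_diffusion(adj: np.ndarray, alpha: float = 0.2) -> np.ndarray:
    """PPR diffusion S = alpha * (I - (1 - alpha) * A_sym)^-1,
    where A_sym = D^-1/2 A D^-1/2 is the symmetrically normalized adjacency."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))       # assumes no isolated nodes
    a_sym = d_inv_sqrt @ adj @ d_inv_sqrt
    return alpha * np.linalg.inv(np.eye(n) - (1.0 - alpha) * a_sym)

# Toy graph: a 4-node path 0-1-2-3.
adj = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    adj[i, j] = adj[j, i] = 1.0

view_adjacency = adj                   # first view: first-order neighbors
view_diffusion = ppr_diffusion(adj)    # second view: graph diffusion (PPR)
print(np.round(view_diffusion, 3))
```

In the full method, a graph neural network encoder is applied to each view and the resulting node- and graph-level encodings are contrasted; the sketch covers only the construction of the two views.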
Related Resources
2023
Generating Pragmatic Examples to Train Neural Program Synthesizers
Using neural networks is a novel way to amortize a synthesizer’s…
2021
Neural UpFlow: A Scene Flow Learning Approach to Increase the Apparent Resolution of Particle-Based Liquids
In this research, we introduce a data-driven approach to increase the…
2021
A Learning Approach to Robot-Agnostic Force-Guided High Precision Assembly
In this work we propose a learning approach to high-precision robotic…
2020
Memory-Based Graph Networks
Graph neural networks (GNNs) are a class of deep models that operate…